Installation
Database Servers
PostgreSQL Installation
This section describes how to install PostgreSQL on a RHEL 8.x Linux environment. If your environment differs, refer to the official installation instructions at https://www.postgresql.org/.
Install the repository RPM:
[root@core-db1 ~]# dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
Disable the PostgreSQL package from the standard OS repository:
[root@core-db1 ~]# dnf -qy module disable postgresql
Install PostgreSQL along with repmgr:
[root@core-db1 ~]# dnf install -y postgresql15-server repmgr_15
To install the database in the specific directory /data/db instead of the default location, the PGDATA environment variable, which PostgreSQL uses to locate its data directory, must be overridden.
Override parameters:
[root@core-db1 ~]# systemctl edit postgresql-15.service
Set its content to:
[Service]
Environment=PGDATA=/data/db
Reload the service configuration:
[root@core-db1 ~]# systemctl daemon-reload
The base configuration and override can be verified with:
[root@core-db1 ~]# systemctl cat postgresql-15.service
If the /data/db
directory already exists (e.g., as a dedicated filesystem mount point), change its owner to the postgres
user:
[root@core-db1 ~]# chown postgres:postgres /data/db
The DB can now be initialized with the command:
[root@core-db1 ~]# /usr/pgsql-15/bin/postgresql-15-setup initdb
Initializing database ... OK
By listing the files in /data/db, it's possible to verify the directory structure created, owned by user postgres:
[root@core-db1 ~]# ls -l /data/db/
total 56
drwx------. 5 postgres postgres 33 Mar 7 12:09 base
drwx------. 2 postgres postgres 4096 Mar 7 12:09 global
...
Modify the database configuration by editing the file /data/db/postgresql.conf
:
listen_addresses = '*' #listen on all addresses
wal_level = replica #generate WAL segments suitable for replication
wal_keep_size = 3200 #limit WAL segments directory to 3200 MB (200 x 16 MB segments)
max_wal_senders = 10 #number of replication connections
hot_standby = on #allow queries on standby
Modify the access rights to allow authenticated connections over the network by appending the following entry to the end of /data/db/pg_hba.conf
:
host all all 0.0.0.0/0 scram-sha-256
TIP
You can restrict network access by specifying a subnet instead of allowing all IPs. Replace 0.0.0.0/0
with the appropriate subnet for your environment.
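For example, assuming the application servers reside in a 10.0.10.0/24 subnet (adapt to your environment), the entry would become:
host all all 10.0.10.0/24 scram-sha-256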
Enable PostgreSQL start on boot:
[root@core-db1 ~]# systemctl enable postgresql-15.service
Created symlink /etc/systemd/system/multi-user.target.wants/postgresql-15.service -> /usr/lib/systemd/system/postgresql-15.service.
Start PostgreSQL:
[root@core-db1 ~]# systemctl start postgresql-15.service
Verify its status:
[root@core-db1 ~]# systemctl status postgresql-15.service
* postgresql-15.service - PostgreSQL 15 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-15.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/postgresql-15.service.d
`-override.conf
Active: active (running) since Fri 2025-03-07 13:44:06 CET; 2min 33s ago
Docs: https://www.postgresql.org/docs/15/static/
Main PID: 4347 (postmaster)
Tasks: 7 (limit: 23632)
Memory: 19.4M
CGroup: /system.slice/postgresql-15.service
|-4347 /usr/pgsql-15/bin/postmaster -D /data/db
|-4348 postgres: logger
|-4349 postgres: checkpointer
|-4350 postgres: background writer
|-4352 postgres: walwriter
|-4353 postgres: autovacuum launcher
`-4354 postgres: logical replication launcher
Mar 07 13:44:06 core-db1 systemd[1]: Starting PostgreSQL 15 database server...
Mar 07 13:44:06 core-db1 postmaster[4347]: 2025-03-07 13:44:06.258 CET [4347] LOG: redirecting log output to logging collector process
Mar 07 13:44:06 core-db1 postmaster[4347]: 2025-03-07 13:44:06.258 CET [4347] HINT: Future log output will appear in directory "log".
Mar 07 13:44:06 core-db1 systemd[1]: Started PostgreSQL 15 database server.
Database Setup
Once the database is installed and started, create the database to store the application data.
Generate a random password that will be used to authenticate the application:
[postgres@core-db1 ~]$ openssl rand -base64 12
Switch to user postgres and connect to the DB:
[root@core-db1 ~]# su - postgres
[postgres@core-db1 ~]$ psql
Create the apio_core
user, replacing <password>
with the one previously generated:
postgres=# CREATE ROLE apio_core LOGIN PASSWORD '<password>';
CREATE ROLE
Create the database apio_core, owned by user apio_core, and set the database timezone to UTC:
postgres=# CREATE DATABASE apio_core OWNER apio_core;
CREATE DATABASE
postgres=# ALTER DATABASE apio_core SET TIMEZONE TO 'UTC';
ALTER DATABASE
The database is now created:
apio_core=# \l+ apio_core
List of databases
Name | Owner | Encoding | Collate | Ctype | ICU Locale | Locale Provider | Access privileges | Size | Tablespace | Description
-----------+-----------+----------+-------------+-------------+------------+-----------------+-------------------+---------+------------+-------------
apio_core | apio_core | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | libc | | 7533 kB | pg_default |
(1 row)
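Optionally, verify that the new role can connect to its database. Whether a password prompt appears depends on the matching pg_hba.conf rule; this is only a quick sanity check:
[postgres@core-db1 ~]$ psql -h 127.0.0.1 -U apio_core -d apio_core -c "SELECT current_user, current_database();"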
Backups Setup
If /data/backup
is mounted as a dedicated filesystem, change its owner to the postgres user to ensure proper access for database backups:
[root@core-db1 ~]# chown postgres:postgres /data/backup
Then, add the following crontab file /etc/cron.d/apio-db with the content:
#remove backups older than 14 days at 02:00 on Sundays
0 2 * * sun postgres find /data/backup/ -type d -name "basebackup*" -ctime +14 -prune -exec rm -rf {} \;
#perform backup at 03:00 on Sundays
0 3 * * sun postgres pg_basebackup -F t -z -X fetch -D /data/backup/basebackup-`hostname`-`date -I`/
Each backup will be stored in a dedicated directory, following the format /data/backup/basebackup-<hostname>-<date>
.
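For instance, a backup taken on core-db1 on 2025-03-07 would be stored under /data/backup/basebackup-core-db1-2025-03-07/.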
TIP
To adjust the backup frequency and retention to your needs, modify the cron schedule and the retention logic in the crontab file accordingly. The compression ratio is around 5:1, meaning that a backup will be about one fifth of the size of the database. To estimate the actual backup size, measure the current database size:
[root@core-db1 ~]# du -hs /data/db
361M /data/db
You can use this 5x compression ratio as a guideline to determine how much disk space you will need for backups and plan accordingly.
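As a worked example, with the 361 MB database shown above a compressed backup would be roughly 70 MB; with weekly backups and the 14-day retention, at most two or three backups are kept at once, so around 150-200 MB of free space in /data/backup would be sufficient in this case.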
Replication Setup
Unless deploying in a lab environment, it is recommended to set up two database servers with replication. In this setup, one server acts as the master, while the other operates as a standby, continuously replicating data from the master.
To configure a redundant database, install a second database server following the same steps as the first one, then proceed with the instructions in the next chapters to configure and initialize replication.
Replication is configured using the repmgr
tool.
Generate a random password that will be used to authenticate the repmgr
user.
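For example, the same command used earlier can generate it:
[root@core-db1 ~]# openssl rand -base64 12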
On the first database server, switch to the postgres
user and create a new user and database for the repmgr
tool, supplying the password previously generated:
[root@core-db1 ~]# su - postgres
[postgres@core-db1 ~]$ createuser -s repmgr -P
Enter password for new role:
Enter it again:
[postgres@core-db1 ~]$ createdb repmgr -O repmgr
Enable replication access by adding the following lines to /data/db/pg_hba.conf
, replacing the IP addresses with those of the two database servers:
local replication repmgr trust
host replication repmgr 127.0.0.1/32 trust
host replication repmgr <ip address of core-db-1>/32 trust
host replication repmgr <ip address of core-db-2>/32 trust
Reload the new access rights configuration:
[root@core-db1 ~]# systemctl reload postgresql-15
On the first DB server, edit the file /etc/repmgr/15/repmgr.conf
and set the following parameters, replacing <ip address of core-db-1>
with the actual IP address of the master server:
node_id=1
node_name='core-db-1'
conninfo='host=<ip address of core-db-1> dbname=repmgr user=repmgr'
data_directory='/data/db'
WARNING
The host in the conninfo parameter must be an IP address that is reachable by the node itself and by the other nodes for data replication; it cannot be set to localhost (127.0.0.1). The dbname and user settings must correspond to the repmgr database name and user created earlier.
On the standby server, set the following parameters in /etc/repmgr/15/repmgr.conf
, ensuring node_id
is set to 2
:
node_id=2
node_name='core-db-2'
conninfo='host=<ip address of core-db-2> dbname=repmgr user=repmgr'
data_directory='/data/db'
Create the .pgpass
file in the postgres
user's home directory with the following content, replacing the IP addresses with the actual IP addresses of the two servers and <password>
with the generated password:
<ip address of core-db-1>:5432:repmgr:repmgr:<password>
<ip address of core-db-2>:5432:repmgr:repmgr:<password>
Ensure the file has the correct permissions:
[postgres@core-db1 ~]$ chmod 600 .pgpass
Then, copy this .pgpass
file to the second DB server to the same location in the postgres
user's home directory.
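For example, assuming direct SSH access between the two servers is available (otherwise transfer the file by any other secure means), and remembering to set its permissions to 600 on the second server as well:
[postgres@core-db1 ~]$ scp ~/.pgpass postgres@<ip address of core-db-2>:~/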
The replication is now configured and can be initialized.
On the first DB server, as user postgres
, register the node as the master node using the following command:
[postgres@core-db1 ~]$ /usr/pgsql-15/bin/repmgr master register
INFO: connecting to primary database...
NOTICE: attempting to install extension "repmgr"
NOTICE: "repmgr" extension successfully installed
NOTICE: primary node record (ID: 1) registered
The status of the cluster can be checked with the following command:
[postgres@core-db1 ~]$ /usr/pgsql-15/bin/repmgr cluster show
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+-----------+---------+-----------+----------+----------+----------+----------+-------------------------------------------
1 | core-db-1 | primary | * running | | default | 100 | 1 | host=10.0.10.71 dbname=repmgr user=repmgr
The second DB server must now be initialized as a standby server and replicate from the master node.
First, as root, stop the database:
[root@core-db2 ~]# systemctl stop postgresql-15
As user postgres
, remove the existing data:
[postgres@core-db2 ~]$ rm -rf /data/db/*
Then, as user postgres
, clone the master DB with the following command:
[postgres@core-db2 ~]$ /usr/pgsql-15/bin/repmgr -h <ip address of core-db-1> -U repmgr -d repmgr standby clone
NOTICE: destination directory "/data/db" provided
INFO: connecting to source node
...
NOTICE: standby clone (using pg_basebackup) complete
NOTICE: you can now start your PostgreSQL server
HINT: for example: pg_ctl -D /data/db start
HINT: after starting the server, you need to register this standby with "repmgr standby register"
As root, start the DB:
[root@core-db2 ~]# systemctl start postgresql-15
As user postgres
, register the standby node:
[postgres@core-db2 ~]$ /usr/pgsql-15/bin/repmgr standby register
INFO: connecting to local node "core-db-2" (ID: 2)
WARNING: database connection parameters not required when the standby to be registered is running
DETAIL: repmgr uses the "conninfo" parameter in "repmgr.conf" to connect to the standby
INFO: connecting to primary database
WARNING: --upstream-node-id not supplied, assuming upstream node is primary (node ID: 1)
INFO: standby registration complete
NOTICE: standby node "core-db-2" (ID: 2) successfully registered
Finally, as user postgres
, check the cluster status from either of the 2 nodes:
[postgres@core-db2 ~]$ /usr/pgsql-15/bin/repmgr cluster show
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+-----------+---------+-----------+-----------+----------+----------+----------+-------------------------------------------
1 | core-db-1 | primary | * running | | default | 100 | 1 | host=10.0.10.71 dbname=repmgr user=repmgr
2 | core-db-2 | standby | running | core-db-1 | default | 100 | 1 | host=10.0.10.73 dbname=repmgr user=repmgr
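Replication activity can also be verified directly in PostgreSQL on the primary node, where the standby should be listed with a streaming state:
[postgres@core-db1 ~]$ psql -c "SELECT client_addr, state FROM pg_stat_replication;"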
APIO Core Servers
The APIO Core application is distributed through two methods: as container images for deployment via Docker, or as RPM packages for direct installation on the operating system. Today, container-based deployment is the preferred method, as it ensures application isolation and consistency.
Docker Installation
Install the DNF plugins to manage the Docker repository:
[root@core-app-1 ~]# dnf -y install dnf-plugins-core
Add the official Docker repository:
[root@core-app-1 ~]# dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
Adding repo from: https://download.docker.com/linux/rhel/docker-ce.repo
Install Docker along with the Compose plugin:
[root@core-app-1 ~]# dnf install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Enable automatic start on boot:
[root@core-app-1 ~]# systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service -> /usr/lib/systemd/system/docker.service.
Start Docker:
[root@core-app-1 ~]# systemctl start docker
Verify its status:
[root@core-app-1 ~]# systemctl status docker
* docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2025-03-07 14:12:47 CET; 6s ago
Docs: https://docs.docker.com
Main PID: 15817 (dockerd)
Tasks: 8
Memory: 25.4M
CGroup: /system.slice/docker.service
`-15817 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
...
Containers Setup
When the Core application is deployed using containers, the setup consists of several services:
- NGINX service: Acts as a reverse proxy, routing traffic to different Core instances and serving the Core GUI static files.
- Core main service: Exposes the workflow engine and serves as the backend for the Core GUI.
- Core scheduler service: Responsible for executing workflows based on the configured schedule.
- Core proxy instances: These optional instances proxy requests to one or more BWGW instances.
The NGINX service is configured to proxy multiple URL paths in a best-matching way. At a minimum, it serves the static files and proxies the Core workflow engine and GUI backend. Depending on the setup, it may also proxy certain API endpoints to one or more Core proxy instances.
A new docker-compose.yml file that defines the various services required for the Core application must be created.
First, create a new directory under /opt to store all the configuration elements:
[root@core-app-1 ~]# mkdir -p /opt/apio_core
[root@core-app-1 ~]# cd /opt/apio_core
The docker-compose.yml file consists of several services that are detailed in the following chapters.
Common Options for All Services
Certain configuration options apply uniformly to all services defined in the next sections:
- restart: always – Ensures that services automatically restart in case of failure.
- logging – Configured with the Syslog driver to forward all console logs to the host's Syslog daemon. The tag option ({{.Name}}) prefixes log entries with the service name, making it easier to identify logs per service.
These options will be included in the docker-compose.yml
file for each service, as shown below.
restart: always
logging:
  driver: syslog
  options:
    tag: "{{.Name}}"
NGINX Service
The first service to be included in the configuration file is NGINX, which will serve as a reverse proxy for routing traffic to the various instances of the Core application that will be described in the following sections.
The first step is to create the NGINX configuration file on the host to define proxying rules for various API endpoints across different Core instances. Additionally, NGINX will serve the static files required for the Core GUI.
The configuration consists of several location directives:
- Serving the GUI static files
- Proxying the /api/ base URL to the main instance by default
- Proxying the /api/v01/proxy/ base URL to the proxy instance
- Proxying the /health API to the main instance
TIP
The /health
URL can be used by external load balancers or reverse proxies to perform a health check on the APIO Core server.
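For example, once the stack is running, the endpoint can be probed with a plain HTTP request (replace the address with the one of your APIO Core server):
curl -s http://<ip address of core-app-1>/health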
Create a new configuration file nginx.conf
under /opt/apio_core
with this content. Adapt the number and name of proxies according to your requirements.
WARNING
Ensure that the proxy_pass URL matches the Core proxy service name, exposed port, and URL mount point defined later in the Core proxy service section.
server {
    listen 80;
    server_name _;
    client_max_body_size 15M; # needed to upload big files
    access_log /var/log/nginx/nginx_access.log;
    error_log /var/log/nginx/nginx_error.log;
    location / {
        root /www/;
        try_files $uri $uri.html /index.html;
    }
    location /api/ {
        proxy_pass http://core:5000/api/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    location /api/v01/p1/ {
        proxy_pass http://p1:5000/api/v01/p1/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    location /health {
        # health check endpoint proxied to the main instance
        proxy_pass http://core:5000/health;
        proxy_set_header Host $host;
    }
}
Port 80 is exposed so that the application is reachable from outside.
Several volumes will be mapped as follows:
- static-files will store all the necessary files required to serve the Core GUI.
- The local configuration file ./nginx.conf will contain the NGINX configuration. This binding allows for easy modification and persistence of the NGINX configuration on the Docker host.
- The NGINX log directory will be mapped to the /var/log/apio_core directory on the Docker host, ensuring logs are stored outside the container.
The service is marked as depending on the other Core instances to ensure that it is started after them.
Here is an example service configuration section for the NGINX service. This example includes one Core instance for the GUI (service core
) and one Core instance for proxied requests (service p1
)
nginx:
  image: nginx:latest
  restart: always
  ports:
    - "0.0.0.0:80:80"
  volumes:
    - static-files:/www/
    - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    - /var/log/apio_core:/var/log/nginx
  depends_on:
    - core
    - p1
  logging:
    driver: syslog
    options:
      tag: "{{.Name}}"
Common Options for All Core Services
Several environment variables are defined:
- DB defines the connection string to the Postgres master database, e.g. DB=postgres://username:password@host:5432/apio_core.
- VIRTUAL_ENV is set to /opt/apio_core.
The host's Docker socket (/var/run/docker.sock) is mapped into the container, enabling the workflow engine to spawn container-based integrations when required.
These environment variables and volume will be included in the docker-compose.yml
file for each Core service, as shown below.
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
environment:
  - DB=postgres://postgres:postgres@db:5432/apio_core
  - VIRTUAL_ENV=/opt/apio_core
Core Main Service
The next service to include in the configuration is the main Core application instance, which runs the workflow engine and serves as the backend for the GUI.
Ensure the desired version is included at the end of the official image URL, e.g. docker.bxl.netaxis.be/apio_bsft/core:2.16
The Core application is run with several command-line arguments:
- -workflows to enable the workflow engine, exposing workflow API endpoints
- -port=5000 to listen on port 5000, within Docker
- -host=0.0.0.0 to bind on any IP address, within Docker
- -cleanup to ensure old DB records are cleaned up regularly, according to the retention configuration
- -metrics to expose the Prometheus metrics API endpoints
Port 9090 is exposed for the Prometheus metrics.
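Once the services are running, the metrics endpoint can be checked from the Docker host (a sketch, assuming the standard Prometheus /metrics path):
[root@core-app-1 ~]# curl -s http://localhost:9090/metrics | head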
Volumes will be mapped as follows:
- static-files will store all the necessary files required to serve the Core GUI.
Here is an example service configuration section for the main core application instance:
core:
  image: docker.bxl.netaxis.be/apio_bsft/core:2.15.2
  command: /usr/local/go/server -workflows -port=5000 -host=0.0.0.0 -cleanup -metrics
  restart: always
  ports:
    - "0.0.0.0:9090:9090"
  environment:
    - DB=postgres://postgres:postgres@db:5432/apio_core
    - VIRTUAL_ENV=/opt/apio_core
  volumes:
    - static-files:/usr/local/www/
    - /var/log/apio_async:/var/log/apio_async
    - /var/run/docker.sock:/var/run/docker.sock
  logging:
    driver: syslog
    options:
      tag: "{{.Name}}"
TIP
Note that the Core GUI static files are included within the Core image itself. These files are mapped to the named volume static-files
. Since the NGINX service also binds this volume, it gains visibility into the Core filesystem, allowing it to serve the static files directly.
Core Scheduler Service
To execute workflows according to the configured schedule, an additional instance of the Core application must be deployed. This scheduler
process will be specifically responsible for running scheduled jobs.
Here is an example configuration section for the scheduler:
scheduler:
  image: docker.bxl.netaxis.be/apio_bsft/core:2.15.2
  command: /usr/local/go/scheduler
  restart: always
  environment:
    - DB=postgres://postgres:postgres@db:5432/apio_core
    - VIRTUAL_ENV=/opt/apio_core
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  logging:
    driver: syslog
    options:
      tag: "{{.Name}}"
Core Proxy Service
Finally, if the Core application is deployed alongside the BroadWorks Gateway (BWGW), a proxy instance must be configured for each proxy or gateway defined in the Core GUI.
The Core proxy instance is run with several command-line arguments:
- -workflows to enable the workflow engine, exposing workflow API endpoints
- -port=5000 to listen on port 5000, within Docker
- -host=0.0.0.0 to bind on any IP address, within Docker
- -proxy to define the mapping between the proxied API base URL and the gateway, in the format /api/v01/<proxy-endpoint>:<gateway>
- -runMigration=false to prevent the application from attempting to migrate the DB schema during software updates (this task is left to the main instance)
WARNING
The gateway name must exactly match the name defined in the Core GUI.
This service is repeated for each proxy endpoint/gateway configured.
Here is an example configuration section for a proxy instance named p1
proxying the base URL /api/v01/p1
to the gateway bwks
:
p1:
  image: docker.bxl.netaxis.be/apio_bsft/core:2.15.2
  command: /usr/local/go/server -workflows -port=5000 -host=0.0.0.0 -proxy=/api/v01/p1:bwks -runMigration=false
  restart: always
  environment:
    - DB=postgres://postgres:postgres@db:5432/apio_core
    - VIRTUAL_ENV=/opt/apio_core
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  logging:
    driver: syslog
    options:
      tag: "{{.Name}}"
Example Configuration File
The final docker-compose.yml
configuration file is created by assembling the services described in the previous chapters under the services
section. The static-files
volume must also be declared. Below is an example configuration with a single proxy instance. Create this file in the /opt/apio_core
directory, adapting it to your requirements.
services:
  nginx:
    image: nginx:latest
    restart: always
    ports:
      - "0.0.0.0:80:80"
    volumes:
      - static-files:/www/
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - /var/log/apio_core:/var/log/nginx
    depends_on:
      - core
      - p1
    logging:
      driver: syslog
      options:
        tag: "{{.Name}}"
  core:
    image: docker.bxl.netaxis.be/apio_bsft/core:2.15.2
    command: /usr/local/go/server -workflows -port=5000 -host=0.0.0.0 -cleanup -metrics
    restart: always
    ports:
      - "0.0.0.0:9090:9090"
    environment:
      - DB=postgres://postgres:postgres@db:5432/apio_core
      - VIRTUAL_ENV=/opt/apio_core
    volumes:
      - static-files:/usr/local/www/
      - /var/run/docker.sock:/var/run/docker.sock
    logging:
      driver: syslog
      options:
        tag: "{{.Name}}"
  scheduler:
    image: docker.bxl.netaxis.be/apio_bsft/core:2.15.2
    command: /usr/local/go/scheduler -pprof
    restart: always
    environment:
      - DB=postgres://postgres:postgres@db:5432/apio_core
      - VIRTUAL_ENV=/opt/apio_core
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    logging:
      driver: syslog
      options:
        tag: "{{.Name}}"
  p1:
    image: docker.bxl.netaxis.be/apio_bsft/core:2.15.2
    command: /usr/local/go/server -workflows -port=5000 -host=0.0.0.0 -proxy=/api/v01/p1:bwks -runMigration=false
    restart: always
    environment:
      - DB=postgres://postgres:postgres@db:5432/apio_core
      - VIRTUAL_ENV=/opt/apio_core
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    logging:
      driver: syslog
      options:
        tag: "{{.Name}}"
volumes:
  static-files:
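Before starting the services, the file can be validated from the /opt/apio_core directory with:
[root@core-app-1 apio_core]# docker compose config --quiet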
Logging Setup
Since the services are configured to use the host's syslog daemon, log files are stored according to the default rsyslog configuration.
To facilitate debugging, create a custom rsyslog configuration file for APIO to ensure that different services log to dedicated files under /var/log/apio_core/
. This is achieved by leveraging the programname
, set by the logging directive in docker-compose.yml
, to identify APIO Core logs and direct them to specific log files.
Create a new file, /etc/rsyslog.d/apio.conf
with this configuration:
$FileCreateMode 0644
$template DynFile,"/var/log/apio_core/%programname%.log"
if $programname contains 'apio_core' then \
?DynFile
&stop
TIP
You may forward logs to an external Syslog receiver using the directive *.* @host
for UDP transport or *.* @@host
for TCP transport.
Finally, restart rsyslog to load the new configuration:
[root@core-app-1 apio_core]# systemctl restart rsyslog
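The new rule can be tested by sending a message with a matching tag (a hypothetical tag used only for this test) and checking the resulting file:
[root@core-app-1 apio_core]# logger -t apio_core-test "rsyslog routing test"
[root@core-app-1 apio_core]# cat /var/log/apio_core/apio_core-test.log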
Create a new file /etc/logrotate.d/apio
with the following content to set up daily log rotation and keep 14 days of logs:
/var/log/apio_core/apio_core*.log
{
daily
rotate 14
compress #compress old log files
delaycompress #delay compression to next rotation
missingok #do not issue error message if file is missing
}
/var/log/apio_core/*access.log
/var/log/apio_core/*error.log
{
daily
rotate 14
compress #compress old log files
delaycompress #delay compression to next rotation
missingok #do not issue error message if file is missing
sharedscripts #run postrotate script once for all log files
postrotate #run script to notify application of log file rotation
docker compose -f /opt/apio_core/docker-compose.yml exec nginx nginx -s reload
endscript
}
TIP
You can check that the log rotation works as expected by running:
logrotate -f /etc/logrotate.d/apio
Then, validate the rotation by listing the rotated log files:
ls -lrt /var/log/apio_core/
First Start
Once the docker-compose.yml
file is configured and ensuring that PostgreSQL is running on the database servers, the APIO Core application can be started. From the /opt/apio_core
directory, bring the application up in daemon mode:
[root@core-app-1 apio_core]# docker compose up -d
[+] Running 6/6
✔ Network apio_core_default Created 0.2s
✔ Volume "apio_core_static-files" Created 0.0s
✔ Container apio_core-scheduler-1 Started 0.5s
✔ Container apio_core-p1-1 Started 0.5s
✔ Container apio_core-core-1 Started 0.7s
✔ Container apio_core-nginx-1 Started 0.9s
The logs of the Core main service should show a successful connection to the DB:
[root@core-app-1 apio_core]# docker compose logs core
...
core-1 | time=2025-03-07T15:19:26Z level=info msg=connecting the database ...
core-1 | time=2025-03-07T15:19:26Z level=info msg=database connected with max 4 connection in the pool
core-1 | time=2025-03-07T15:19:27Z level=info msg=1/u create_users_table (414.663729ms)
...
Execute the following command to create the first super-user, with username apio. A random password will be generated and will not be shown again.
[root@core-app-1 apio_core]# docker compose exec -it core /usr/local/go/server -newsuperuser apio
INFO[0000] starting the server... build=20250228140245 commit_id=5473e1cd8d950de6e48452309c087cefdb25e365 version=2.15.2
INFO[0000] connecting the database ...
INFO[0000] database connected with max 4 connection in the pool
INFO[0000] no database migrations to be applied
INFO[0000] 0 config entries protected in 9.18628ms
INFO[0000] protecting documents...
INFO[0000] 0 documents protected in 188.417µs
INFO[0000] protecting user tokens...
INFO[0000] 0 user tokens protected in 879.562µs
user apio upserted with ID 1 and password: 'Nle[bp^FCXblLXn'
write down this passsword, it cannot be recovered!
TIP
This command can be run again to reset the password for that super-user.