The process of moving the system to another server

1. Preparing for database server migration

1.1. Preparing the PostgreSQL database server in a container for backup.

  • Start the terminal.

  • Find the active container with postgresql:

docker ps -a

Let’s say it is named 'postgres_myinstance'.

  • Connect to the postgres container with the discovered name:

docker exec -it postgres_myinstance bash
  • Make a backup copy of the pg_hba.conf file (just in case):

cp /var/lib/postgresql/data/pg_hba.conf /var/lib/postgresql/data/pg_hba.conf.bkp
  • Modify the pg_hba.conf file, appending a rule that allows replication connections from the new server's address:

echo "host replication all 172.27.0.0/16 trust" >> /var/lib/postgresql/data/pg_hba.conf

The address is the one from which the new server will connect to the old server.
The example allows unauthenticated (trust) connections from the 172.27.0.0/16 subnet for replication purposes:

host replication all 172.27.0.0/16 trust
  • Exit back to the host and restart the postgres container:

docker restart postgres_myinstance

Once the whole migration procedure is complete, the old database must be secured against external connections: either shut it down completely or remove the possibility of connecting to it without authentication.
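Closing off the unauthorized access can be sketched as removing the trust rule and then restarting the container. The snippet below works on a scratch copy of pg_hba.conf rather than the live file (on the real server the file is /var/lib/postgresql/data/pg_hba.conf inside the postgres_myinstance container):

```shell
# Scratch copy of pg_hba.conf standing in for the real file
cp_hba=$(mktemp)
printf '%s\n' \
  'host all all 127.0.0.1/32 md5' \
  'host replication all 172.27.0.0/16 trust' > "$cp_hba"

# Delete the temporary replication trust rule added for the migration
sed -i '/^host replication all 172\.27\.0\.0\/16 trust$/d' "$cp_hba"

cat "$cp_hba"

# On the real server, follow up with:
# docker restart postgres_myinstance
```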

2. Creating a backup copy of the database server.

The address is the local network address allowed for replication above; the port is 5432 by default, as specified in /var/lib/postgresql/14/era${instance}/postgresql.conf. (Make sure that listen_addresses is commented out or includes the network interface of interest, and that port is commented out or set to 5432; otherwise, use the values specified there.)

/usr/lib/postgresql/14/bin/pg_basebackup -P -R \
  -X stream \
  -c fast -h 172.27.1.101 -p 5432 \
  -U erapgadmin \
  -D /var/lib/postgresql/14/era${instance}
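The listen_addresses and port check described above can be scripted. A minimal sketch against a scratch copy of postgresql.conf (commented-out lines mean the defaults, localhost and 5432, apply):

```shell
# Scratch stand-in for /var/lib/postgresql/14/era${instance}/postgresql.conf
conf=$(mktemp)
printf '%s\n' \
  "#listen_addresses = 'localhost'" \
  '#port = 5432' \
  'max_connections = 100' > "$conf"

# Show the effective listen_addresses and port lines, commented or not
grep -E '^[[:space:]]*#?[[:space:]]*(listen_addresses|port)[[:space:]]*=' "$conf"
```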

This directory should eventually end up on the server that will host the new database. It can either replace the existing data directory or become a new directory for a new instance. In the latter case, you will need to configure a daemon that starts the new instance automatically.

3. Get the necessary values from the old version

3.1. Download the license from the previous version

  • Start the terminal.

  • In the volumes folder of the current system (selected during installation; /opt/era${instance}/lib/_workdir/ by default), take the r.lic file and copy it straight to the new server, for example with rsync -avrh --progress ./r.lic user@destination:/tmp

3.2. Download the configuration from the previous version

  • Log in to the master domain as admin, open the configuration section, open the active configuration, and copy its contents into a text editor.

3.3. Download, or prepare to download, the syncroot directory (/opt/era${instance}/syncroot).

3.4. Download, or prepare to download, the era_recpath directory (/opt/era${instance}/era_recpath).

3.5. Prepare an Incoplax installer startup string containing all the options of the previous installation. (The installation string should be saved in the project file along with other system properties useful for restoring project information).

4. Installing Incoplax on a new server.

  • In general, the installation is performed to a different local address. In the special case where the address is identical, no configuration changes are required.

  • It is advisable, though not mandatory, to keep as many of the original installation parameters as possible (server name, folder locations, instance name, passwords, cookies, options) so that as few additional edits as possible are required. It is best to take the former installer startup string with all its assigned parameters and change only what is necessary, i.e. the local address.

If directories change their locations, the corresponding changes need to be accounted for by adapting the scripts from the examples.

Next we consider the option of installing the platform with a new empty database and further reconnecting to the restored database.
The ${BACKUP_DATA_FOLDER} environment variable used in the following examples contains the path to the directory where the database is backed up.

4.1. Postgres on a new server in a container.

Once the system is up, a container with postgres (postgres_myinstance, since we keep as much of the original values as possible) will appear on the server.

You can connect to the container from the terminal:

docker exec -it postgres_myinstance bash

The /var/lib/postgresql/data folder inside the container contains the configs and data.
This directory is treated as a container volume and mounted to a folder on the host.

4.1.1. Let’s determine the location of the instance data directory on the host:

docker inspect postgres_myinstance

In the output we will find a directory with configs and data ("Source"):

...
    "Mounts": [
        {
            "Type": "volume",
            "Name": "pg_data_vol",
            "Source": "/var/lib/docker/volumes/pg_data_vol/_data",
            "Destination": "/var/lib/postgresql/data",
            "Driver": "local",
            "Mode": "z",
            "RW": true,
            "Propagation": ""
        }
    ],
...
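The Source path can also be extracted mechanically rather than by eye. A sketch against a saved fragment of the inspect output (on a live host, `docker inspect -f '{{range .Mounts}}{{.Source}}{{end}}' postgres_myinstance` prints the path directly via docker's Go-template syntax):

```shell
# Saved fragment of `docker inspect postgres_myinstance` output
inspect=$(mktemp)
cat > "$inspect" <<'EOF'
"Mounts": [
    {
        "Type": "volume",
        "Name": "pg_data_vol",
        "Source": "/var/lib/docker/volumes/pg_data_vol/_data",
        "Destination": "/var/lib/postgresql/data"
    }
],
EOF

# Pull out the host path behind the "Source" key
grep -o '"Source": "[^"]*"' "$inspect" | cut -d'"' -f4
```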

4.1.2. Rename the directory to *.bkp (just in case for recoverability):

sudo mv /var/lib/docker/volumes/pg_data_vol/_data /var/lib/docker/volumes/pg_data_vol/_data.bkp

4.1.3. Let’s remove the replica’s recovery link to the master from the directory:

sudo rm ${BACKUP_DATA_FOLDER}/recovery.conf

4.1.4. Let’s assign the docker group as the owner group of the directory:

sudo chown root:docker ${BACKUP_DATA_FOLDER}

4.1.5. Let’s move the directory into the volume location:

sudo mv ${BACKUP_DATA_FOLDER} /var/lib/docker/volumes/pg_data_vol/_data

4.1.6. Restart the container

docker restart postgres_myinstance

4.1.7. Let’s check that the container is working correctly.
This can be done via pgadmin, which is also installed in the container,
or any other way, such as via psql inside the container:

docker exec -it postgres_myinstance bash

su postgres

psql -p 5432 -c "Select pg_is_in_recovery()"

psql -p 5432 -d r_domain_master -c "Select * FROM \"user\".users"

4.2. Postgres on a new server in the host.

Once the platform is installed, the /var/lib/postgresql/14 folder with instance subfolders will appear on the server.

4.2.1. Copy the backup directory here (or back up directly from step 2 into a new subdirectory here; the target subfolder must not exist when the backup is started).
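A quick pre-check of that constraint can be sketched in shell. The path here is a scratch directory standing in for /var/lib/postgresql/14/era_myinstance:

```shell
# Scratch parent directory standing in for /var/lib/postgresql/14
parent=$(mktemp -d)
target="$parent/era_myinstance"   # intentionally not created

# The target must not exist before pg_basebackup -D (or the copy) runs
if [ -e "$target" ]; then
    echo "refusing: $target already exists" >&2
else
    echo "ok: $target is free for pg_basebackup -D"
fi
```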

4.2.2. Make edits to the files in the config directory:

  • correct pg_hba.conf, removing unneeded access rules, in particular the trust rule added earlier for replication;

  • edit postgresql.conf, changing the port to avoid conflicts with the default instance (uncomment the port option and set it to, for example, 5442);

  • delete the recovery.conf file if it exists (to decouple from the master server):

rm recovery.conf
  • Rename the current config, saving it just in case:

mv postgresql.auto.conf postgresql.auto.conf.bkp
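The edits above can be scripted. A sketch using sed and mv against scratch copies of the config files (the real files live in the instance data directory, e.g. /var/lib/postgresql/14/era_myinstance):

```shell
dir=$(mktemp -d)

# Scratch stand-ins for the real config files
printf '%s\n' \
  'host replication all 172.27.0.0/16 trust' \
  'host all all 127.0.0.1/32 md5' > "$dir/pg_hba.conf"
printf '%s\n' '#port = 5432' 'max_connections = 100' > "$dir/postgresql.conf"
touch "$dir/recovery.conf" "$dir/postgresql.auto.conf"

# 1. Drop the temporary replication trust rule
sed -i '/^host replication all .* trust$/d' "$dir/pg_hba.conf"
# 2. Uncomment the port and move it off the default to avoid conflicts
sed -i 's/^#\{0,1\}port = .*/port = 5442/' "$dir/postgresql.conf"
# 3. Decouple from the master and shelve the auto config
rm -f "$dir/recovery.conf"
mv "$dir/postgresql.auto.conf" "$dir/postgresql.auto.conf.bkp"

grep '^port' "$dir/postgresql.conf"
```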

4.2.3. Starting a new instance:

/usr/lib/postgresql/14/bin/pg_ctl -D /var/lib/postgresql/14/era_myinstance start

4.2.4. Let’s try to execute an SQL query to the instance:

psql -p 5442 -c "Select pg_is_in_recovery();"

It should return f. (If it doesn’t, the instance started in replica mode and needs to be promoted to master by a signal: a master.signal file in the config directory, configurable in postgresql.conf.)

4.2.5. Create an instance autorun service on startup Linux.

sudo tee /etc/systemd/system/postgresql_12_era_myinstance.service > /dev/null <<'EOF'
[Unit]
Description=PostgreSQL instance era_myinstance service
After=network.target

[Service]
Type=forking

User=postgres
Group=postgres

OOMScoreAdjust=-1000

Environment=PG_OOM_ADJUST_FILE=/proc/self/oom_score_adj
Environment=PG_OOM_ADJUST_VALUE=0

Environment=PGSTARTTIMEOUT=270

Environment=PGDATA=/var/lib/postgresql/14/era_myinstance

ExecStart=/usr/lib/postgresql/14/bin/pg_ctl start -D ${PGDATA} -s -w -t ${PGSTARTTIMEOUT}
ExecStop=/usr/lib/postgresql/14/bin/pg_ctl stop -D ${PGDATA} -s -m fast
ExecReload=/usr/lib/postgresql/14/bin/pg_ctl reload -D ${PGDATA} -s

TimeoutSec=300

[Install]
WantedBy=multi-user.target
EOF

(The quoted EOF delimiter keeps the invoking shell from expanding ${PGDATA} and ${PGSTARTTIMEOUT}; systemd substitutes them from the Environment= lines at runtime.)

Reload the systemd daemon, then enable the service:

sudo systemctl daemon-reload

sudo systemctl enable "postgresql_12_era_myinstance.service"

4.2.6. Restart the machine to make sure that the daemon is running and the instance starts automatically on boot:

sudo reboot

4.2.7. Making sure the instance is running:

su postgres

psql -p 5442 -c "Select 1;"

Similar to 4.1, you don’t have to create a new instance; you can instead swap in the directory of an existing instance.

4.3. Installation option with connection to a new restored database.

  • This option involves pre-installing the database server, either manually or by running the platform installer up to the database installation stage and then terminating the process.

  • Then the main instance (the directory with configs and data) is replaced, or a new instance is added; accordingly, the service is started manually and configured to start automatically at machine startup.

  • A subsequent installation of the platform utilizes the existing database.

5. Setting up a new platform.

Connect to the new platform instance via the web interface. By default, it’s

Downloading a license

The license file downloaded in step 3.1. can be uploaded:

  • or under the master domain administrator account directly in the Settings application.

  • or by placing/copying the r.lic file from the old system (/opt/era_instance/lib/mdc1@172.27.1.101/lic/r.lic) to the same location on the new one.

Copying a catalog syncroot

The syncroot directory (item 3.3.) can be copied directly from the previous server, or from intermediate storage:

rsync -avrh --progress era@172.27.1.101:/opt/era_myinstance/syncroot/ /opt/era_myinstance/syncroot/

Copying a catalog era_recpath

The era_recpath directory (clause 3.4.) can be copied directly from the previous server, or from intermediate storage:

rsync -avrh --progress era@172.27.1.101:/opt/era_myinstance/era_recpath/ /opt/era_myinstance/era_recpath/

Connecting a new database

The new database is connected by changing the configuration.

Configuration

The installation phase of the new system yielded two configuration files:

  • one from the new installation;

  • one from the old system.

The task of this stage is to merge them. The result should, in general, be the configuration of the former system, with the following items corrected:

  • Postgresql master domain database connection string;

  • Server addresses;

  • Ports (only if changed from the previous installation);

  • Server name (only if changed from the previous installation);

  • Paths to directories within the container (only if changed from the previous installation).

Custom complex configuration settings may also contain special values that require adjustment when creating a new merge configuration.
You should search the file for any references to the previous server address, as well as any other values that were intentionally changed during the new installation.
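That search can be done mechanically. A sketch that greps an exported configuration file for the old server address (the file contents and the address 172.27.1.101 are assumptions taken from the earlier examples):

```shell
# Scratch stand-in for the exported configuration
cfg=$(mktemp)
printf '%s\n' \
  'db_master = "postgres://erapgadmin@172.27.1.101:5432/r_domain_master"' \
  'server_name = "era_myinstance"' > "$cfg"

# Every line still mentioning the old address needs to be revised
grep -n '172\.27\.1\.101' "$cfg"
```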

After saving the result as a new configuration, it will be checked by the validator, and if successful, it will be displayed with the status "Correct".
At that point, you can activate it.

System restart, status check

Check the system status (Master Domain, Settings → System → Status).
Wait for the system to return to the correct state without errors or warnings.

Perform a restart of the server:

sudo reboot

Ensure that after restarting the server the platform comes up routinely and logs no errors, that is:

  • log into the master domain,

  • check the system status,

  • make sure the status contains no problem notifications,

  • make sure the working domains are in place,

  • make sure that communication scripts open and nested static sound files are available.

Installing the product layer

This is done according to the usual procedure.
The custom packages used on the system are then loaded and activated in builder.