Installation on a slave server using the installer
Overview
The system is installed from a local machine over an SSH connection to the server, provided the server is reachable from the local machine at the same address where the system is to be deployed.
cd /tmp/debian_ubuntu
bash install_ssh.sh
In other cases, when the server's local address is not reachable from outside, first unpack and copy the installer to the server, then connect via SSH and run the script.
rsync -avrh --progress -e "ssh -p PORT" /tmp/debian_ubuntu USER@HOST:/tmp
ssh USER@HOST -p PORT
cd /tmp/debian_ubuntu/era
bash era.sh install
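For illustration, a filled-in invocation might look like this (the port, user, and host below are hypothetical; substitute your own values):
rsync -avrh --progress -e "ssh -p 2222" /tmp/debian_ubuntu admin@192.168.0.116:/tmp
ssh admin@192.168.0.116 -p 2222
cd /tmp/debian_ubuntu/era
bash era.sh install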
Installation procedure
1) Run the installation script: install_ssh.sh on the local machine, or era/era.sh on the server.
2) Enter the server address and login; if the script is running on the local machine, wait for the files to be copied to the server.
Enter remote server IP-address: 192.168.0.116
Enter remote server login: admin
3) Specify whether the server has Internet access.
Should installer apt-update and download packages from internet? (Y/n):
If the server has no Internet access, the installer will list the packages that must be installed but are missing from the server.
If the list is not empty, abort the installation and install the missing packages using the package installer (a manual check is sketched after the prompt below).
Make sure that following containers are properly installed:
Proceed installation? (y/N):
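To check manually whether a package reported by the installer is already present, dpkg can be queried directly (the package name below is only an example):
dpkg -s docker-ce          # prints "Status: install ok installed" if the package is present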
4) Specify the instance name postfix.
Instance name postfix (recommended: _01, _02, etc; default: empty):
Several different instances of the system can be installed on the same server simultaneously, in separate Docker containers.
This postfix will be present in the container name, in the PostgreSQL DBMS instance name (if installed here), and in the file system directory names.
Specify the same postfix as was used on the master server of the same configuration.
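For example, assuming the postfix _01 and that it appears verbatim in the container name (the exact naming scheme is defined by the installer), running instances can be filtered by it:
docker ps --filter "name=_01"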
5) Specify the default directory for hosting data (Docker container volumes, the PostgreSQL data directory, NFS folders, etc.).
Data path (default: /opt):
6) Select the installation mode. In this case, slave.
Are you installing master server (y/n)?: n
7) Specify the addresses of the infrastructure microservices of the deployed cluster.
Enter mic address (example: mic@1.2.3.4,mic1@1.2.3.4,ic1@1.2.3.4): mic@192.168.0.115,mic1@192.168.0.115,ic1@192.168.0.115
Enter the addresses separated by commas, without spaces. In the example, the master server is deployed at 192.168.0.115; the system will attempt to connect to the three nodes in sequence.
8) Select a server name.
The name is used when building the distributed configuration to refer specifically to this server.
By default, the machine hostname is suggested, but any value that is unique across the servers can be specified.
Enter server name for configuration (default: box-116): srv02
9) Select whether to use a proxy server to access the Internet during installation.
If a proxy is used, you will need to specify the address (or name) of the HTTP and HTTPS proxy servers.
Should installer use http/https proxy (y/N)?:
10) Wait for the Docker packages to be installed (if you selected installation with Internet access).
11) Specify whether a PostgreSQL DBMS instance needs to be installed.
Install PostgreSQL? (Y/n): y
If the system will connect to existing external PostgreSQL servers or farms, a DBMS instance does not need to be installed. Instead, specify connection parameters for one of the PostgreSQL servers, and add connection strings for the additional servers (replicas or standbys) when setting up the configuration; a sample connection string is sketched below.
In a multi-server deployment with PostgreSQL, DBMS instances should be installed on only two selected servers according to the deployment plan: the first is the master server, the second is any other server.
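As an illustration, a libpq-style connection string to one such external server might look like the following; the host and port reuse the examples from this guide, while the database name and password are purely hypothetical:
postgresql://erapgadmin:SECRET@192.168.0.115:5441/era?sslmode=prefer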
12) If installation with PostgreSQL is selected:
12.1) Specify passwords for the users postgres, erapgadmin, and era_replica.
Enter postgreSQL user 'postgres' password: (default: ********):
Enter postgreSQL user 'erapgadmin' password: (default: ********):
Enter postgreSQL user 'era_replica' password: (default: ********):
Set the same values as were used when installing on the master server.
12.2) Specify that replication is required (more than one server).
Do you need replication? (y/n): y
12.3) Select the path for the PostgreSQL instance data directory.
Enter postgres instance/cluster data folder (default: /var/lib/postgresql/12/era_test):
12.4) Select the path for the PostgreSQL instance backup directory.
Enter postgres instance/cluster backup folder (default: /tmp/pg_backups/era_test):
12.5) Select the installation mode: recovery.
Select PostgreSQL master/recovery install mode (m/r): r
For the first server, specify master; for the others, recovery.
12.6) Specify the address and port of the server where the master instance is running and accessible (a reachability check is sketched after the prompts below).
Enter pgSql master IP address:
Enter pgSql port (default 5441):
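Before proceeding, you can optionally verify from this server that the master instance is reachable; pg_isready ships with the PostgreSQL client packages (the address reuses the example above):
pg_isready -h 192.168.0.115 -p 5441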
12.7) Enter the password for the user era_replica. It was set when the system was installed on the master server and is now required for authorization. The script prompts for the password entered at step 12.1.
-> Loading backup...
Enter era_replica's db password (enter: *********):
12.8) Set a password for the user postgres:
Now setup password for system user 'postgres' (it would be used for ssh):
12.9) If installation into non-standard mounted partitions is used, specify the name of the group to which the postgres user should be added so that it has access to the DBMS data directory.
The server preconfiguration instructions use a group named storage as the example for this case; specify it here.
If no group for accessing the mounted partition has been created yet, create and assign it in a parallel terminal, following the instructions shown in the prompt (see the sketch after the prompt below).
Ensure postgresql data folder root and permissions
Enter group name to add postgres into. Leave blank if no group need:
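A minimal sketch of creating and assigning such a group in a parallel terminal, assuming the group name storage and a hypothetical mount point /mnt/pgdata:
sudo groupadd storage                  # create the access group
sudo chgrp -R storage /mnt/pgdata      # give the group ownership of the data location
sudo chmod -R g+rwx /mnt/pgdata        # allow group members to read and write there
sudo usermod -aG storage postgres      # include the postgres user in the group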
12.10) Wait for the PostgreSQL instance to be installed and proceed to the next step.
13) Configuring working directories (container volumes)
13.1) Specify the directory path on the host for the volume backing the supervisor service directory. It is recommended to keep the default value.
/etc/supervisor/conf.d - is folder of system info.
Enter SUPV volume dir for /etc/supervisor/conf.d (default: /opt/era_test/supv):
13.2) Specify the directory path on the host for the volume backing the working directories.
/var/lib/era - is folder of working directories (recording, mixing, mnesia).
Enter LIB volume dir for /var/lib/era (default: /opt/era_test/lib):
Change the default value if the deployment plan calls for placement in a specific mounted partition.
13.3) Specify the directory path on the host for the volume backing the log directories.
/var/log/era - is folder of logging.
Enter LOG volume dir for /var/log/era (default: /opt/era_test/log):
Change the default value if the deployment plan calls for placement in a specific mounted partition.
13.4) Specify the directory path on the host for the volume backing the temporary file directories.
/var/lib/era_files/local - is folder for local files (webserver and script-machine temporary files, file-server attachments).
Enter volume dir for /var/lib/era_files/local (default: /opt/era_test/local):
Change the default value if the deployment plan calls for placement in a specific mounted partition.
13.5) Specify the directory path on the host for the volume backing the directories with temporary call recording files of the MG role.
/var/lib/era_files/rectemp - is folder for temp records of MG.
Enter volume dir for /var/lib/era_files/rectemp (default: /opt/era_test/rectemp):
Change the default value if the deployment plan calls for placement in a specific mounted partition.
13.6) Specify the directory path on the host for the volume backing the long-term log storage directory.
/var/log/era_logstore - is folder of storage.
Enter LOGSTORE volume dir for /var/log/era_logstore (default: /opt/era_test/logstore):
Change the default value if the deployment plan calls for placement in a specific mounted partition.
13.7) Specify the directory path on the host for the volume backing the storage directory that is synced automatically between all servers.
/var/lib/era_files/syncroot - is folder for files shared by notify-sync.
Enter LOGSTORE volume dir for /var/lib/era_files/syncroot (default: /opt/era_test/syncroot):
Change the default value if the deployment plan calls for placement in a specific mounted partition.
13.8) Specify the directory paths on the host used for moving files from one disk to another. Three directories must be specified; it is recommended to place them on different disks.
/var/lib/era_files/a
/var/lib/era_files/b
/var/lib/era_files/c
Enter volume dir for /var/lib/era_files/a (default: /opt/era_test/a):
Enter volume dir for /var/lib/era_files/b (default: /opt/era_test/b):
Enter volume dir for /var/lib/era_files/c (default: /opt/era_test/c):
Change the default value if the deployment plan calls for placement in a specific mounted partition.
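To see which mounted partitions are available before assigning the volume directories, the standard util-linux tools can be used:
lsblk -o NAME,SIZE,MOUNTPOINT     # block devices and their mount points
df -h /opt                        # free space under the default data path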
14) Configuring client NFS folders (placing volumes under container directories on the host)
14.1) Specify the directory path on the host for the volume backing the default call recording storage.
RECPATH - is folder of storage. It is used to store records for a long time (alternative to S3 external storage). This volume could be non-fast. Recommended to mount it previously to reliable file storage.
Enter RECPATH NFS-client volume dir for /var/lib/recpath (default: /opt/era_test/era_recpath):
14.2) Specify the directory path on the host for the volume backing the intra-site network folder accessible to all servers of the current site (siteshare).
SITESHARE - is folder of NFS sync. It is used in arbitrary project cases for interaction within site. Recommended to mount it previously to reliable file storage.
Enter NFS-client volume dir for /var/lib/siteshare (default: /opt/era_test/era_siteshare):
14.3) Specify the directory path on the host for the volume backing the global network folder accessible to all servers of the cluster at all sites (globalshare).
GLOBALSHARE - is folder of NFS sync. It is used in arbitrary project cases for interaction within all sites. Recommended to mount it previously to reliable file storage.
Enter NFS-client volume dir for /var/lib/globalshare (default: /opt/era_test/era_globalshare):
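If these folders should reside on reliable external storage, they can be mounted from an NFS server beforehand; in this sketch the server address and export path are hypothetical:
sudo mkdir -p /opt/era_test/era_recpath
sudo mount -t nfs 192.168.0.120:/export/recpath /opt/era_test/era_recpath
# to make the mount persistent, add a line like this to /etc/fstab:
# 192.168.0.120:/export/recpath  /opt/era_test/era_recpath  nfs  defaults  0  0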
15) Decline to use the current server as an NFS server.
Do you want to setup as NFS server (y/N)?: N
16) Select which address should be used as the NFS server. Set the same value as when installing the master server.
Enter NFS server address (empty to skip setup)?:
When installing a multi-site system, the mount directories must be configured more finely than the installer script provides for, in particular siteshare and globalshare (for example, as sketched below).
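For instance, siteshare might be mounted from a site-local NFS server and globalshare from a central one; a hypothetical /etc/fstab fragment (all addresses and export paths are assumptions):
# site-local share, one NFS server per site
10.1.0.10:/export/siteshare        /opt/era_test/era_siteshare    nfs  defaults  0  0
# global share, a single NFS server for all sites
192.168.0.120:/export/globalshare  /opt/era_test/era_globalshare  nfs  defaults  0  0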
17) Wait for the Docker image to be downloaded.
18) Specify a cookie that matches the cookie set when the master server was installed:
PSK cookie: (default: *********):
19) Wait for the container to start.
Era. Installation success!
By this step, a configuration that includes this server must have been created and activated. If the distributed configuration has not yet been created and activated on the master server, the script will not be able to finish: it will wait for the server to appear in the configuration, periodically synchronizing the current configuration from the infrastructure microservices specified at step 7. You can abort the installation, but the next step will then be skipped; the process of connecting the server to the system will still complete in the background once a new configuration including this server is activated. It is also possible, without closing the current terminal, to create and activate the configuration in parallel and then return to complete the process.
20) Specify whether the MSK (+03:00) time zone should be set for use by the nodejs processes.
Do you need MSK timezone for nodejs usage? (Y/n):
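Presumably this option pins the MSK (+03:00) zone for the node processes via the TZ environment variable; the effect can be checked manually (a sketch, assuming node is installed):
TZ=Europe/Moscow node -e "console.log(new Date().toString())"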
Installation complete!
You can move on to installing the system on the next server!
If the system is installed on all servers, you can move on to creating and configuring domains.