Part One: Creating multiple PostgreSQL instances on a single host
Step One: Install the PostgreSQL software
To install PostgreSQL, download the installation package from the PostgreSQL website (https://www.postgresql.org/), unpack it, create the postgres user and group, and set the environment variables. The version used here is PostgreSQL 9.5.1.
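For reference, a minimal sketch of those installation steps as a source build; the prefix /usr/local/pgsql and the home directory /home/postgres are illustrative assumptions, adjust to your environment:
# run as root: unpack the downloaded source and build it
tar xvf postgresql-9.5.1.tar.gz
cd postgresql-9.5.1
./configure --prefix=/usr/local/pgsql
make
make install
# create the postgres group and user
groupadd postgres
useradd -g postgres postgres
# add the bin directory to the postgres user's PATH
echo 'export PATH=/usr/local/pgsql/bin:$PATH' >> /home/postgres/.bash_profile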
Create a PostgreSQL instance
Once the PostgreSQL software is installed, creating a PostgreSQL instance takes three steps: create the instance directory, initialize the database cluster (with the initdb command), and start the instance service (with the pg_ctl command); configuration follows in the next step.
Step One: Create directories and files:
su - postgres
mkdir -p /data/pgsql/{data,data02}
Step Two: Initialize instance 1:
export PGDATA=/data/pgsql/data
initdb
Step Three: Start PostgreSQL instance 1:
pg_ctl -D /data/pgsql/data -l /data/pgsql/data/postgres.log start
Instance 2 can also be created by following the three steps above:
Initialize instance 2:
export PGDATA=/data/pgsql/data02
env | grep PGDATA
initdb
Start instance 2:
pg_ctl -D /data/pgsql/data02 -l /data/pgsql/data/postgres02.log start
To shut down a database instance, stop it with the pg_ctl command:
pg_ctl stop -D /data/pgsql/data02 -m fast
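Either instance can also be checked at any time with pg_ctl status, for example:
pg_ctl status -D /data/pgsql/data
pg_ctl status -D /data/pgsql/data02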
Step Two: Configure instance listening and external access
Modify the configuration file and port of instance 2. When there are multiple instances on one server, each instance must listen on a different port. To allow client access, configure the listen addresses in postgresql.conf and add access permissions in pg_hba.conf.
Modify the PostgreSQL instance's configuration parameter file:
vim postgresql.conf
listen_addresses = '*'        # what IP address(es) to listen on
port = 5433                   # (change requires restart)
max_connections = 100         # (change requires restart)
Once these parameters are modified, restart the PostgreSQL instance for them to take effect; after the restart, instance 2 listens on port 5433 on all IP addresses of the server.
The commands to restart the PostgreSQL instance are:
pg_ctl stop -D /data/pgsql/data02 -m fast
pg_ctl -D /data/pgsql/data02 -l /data/pgsql/data/postgres02.log start
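After the restart, you can confirm that both instances are listening on their respective ports, for example:
netstat -tnlp | grep -E '5432|5433'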
Modify the PostgreSQL instance's access control file:
vim pg_hba.conf
# IPv4 local connections:
host    all    all    192.168.1.0/24    trust
With this entry added, the PostgreSQL instance allows hosts on the 192.168.1.0/24 network segment to connect to any database in the instance as any user; the trust method requires no password (use md5 for password authentication).
For stricter restrictions, tighten the relevant fields accordingly. After the file is modified, reload the configuration for it to take effect:
pg_ctl reload
After the instance has started and access permissions are in place, the format for connecting to a PostgreSQL instance is:
psql -h <ip address> -p <port> [database name] [user name]
Example:
psql -h 192.168.10.119 -p 5432 testdb postgres
psql -h 192.168.10.119 -p 5433 postgres postgres
That covers everything from installing the PostgreSQL software to creating, configuring, and using multiple PostgreSQL instances.
############################
Part Two: Installing the Slony-I software on Linux
Most Linux distributions ship their own Slony-I packages, but the bundled version is usually old; you can download the latest version from the Slony website and install it.
Compiling and installing the Slony-I software
On the Slony website, http://www.slony.info/, click Download to enter the download page. (Note: since my PostgreSQL version, 9.5, is relatively new, the slony1-2.1.3 version I tried first produced errors, so I installed the latest slony1-2.2.5.)
Select the appropriate version; 2.2 is chosen here.
In the pop-up interface, choose to download the source code directly or through the CVS interface; here select direct source download.
Select the release to download, 2.2.5 here; the file is named slony1-2.2.5.tar.bz2.
When the download is complete, place the source package in a directory and unpack it:
cd /usr/local/src
rz slony1-2.2.5.tar.bz2
tar xvf slony1-2.2.5.tar.bz2
Unpacking creates the directory slony1-2.2.5; enter it, then compile and install:
cd slony1-2.2.5
./configure --with-perltools --with-pgconfigdir=/usr/local/pgsql/bin
make
make install
Note that the build needs to find the pg_config executable. If pg_config is already on the PATH, configure can locate it on its own; if not, point to its directory with the --with-pgconfigdir parameter.
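Before running configure, you can confirm where pg_config lives, for example:
which pg_config
pg_config --bindir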
In addition, Slony-I ships a set of Perl script tools, named altperl scripts, that simplify Slony-I configuration. They are not built by default; pass --with-perltools to configure to include them.
After the build finishes, the Slony-I installation is complete. Slony-I is generally installed into the same directory tree as the PostgreSQL software, which can be verified with:
which postgres
which slon
which slonik
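If the installation succeeded, all three commands should resolve to the PostgreSQL bin directory; with the prefix assumed above, the output would look something like:
/usr/local/pgsql/bin/postgres
/usr/local/pgsql/bin/slon
/usr/local/pgsql/bin/slonik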
############################
Part Three: Configuring and verifying Slony-I replication
Step One: Plan and create the replication environment
The instance ports and roles of the two PostgreSQL instances involved in replication are as follows:
192.168.10.119:5432  master
192.168.10.119:5433  slave
In both instances, create a superuser named slony for Slony-I to use:
create user slony superuser password 'slonytest';
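Since the user must exist in both instances, the statement can be run against each port from the shell, for example:
psql -h 192.168.10.119 -p 5432 -U postgres -c "create user slony superuser password 'slonytest';"
psql -h 192.168.10.119 -p 5433 -U postgres -c "create user slony superuser password 'slonytest';"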
Create a database named master on the primary instance as the replication source database:
psql -h 192.168.10.119 -p 5432 postgres postgres
create database master;
alter database master owner to slony;
Create a database named slave on the standby instance as the replication target database:
psql -h 192.168.10.119 -p 5433 postgres postgres
create database slave;
alter database slave owner to slony;
Subsequent replication takes place between the master database and the slave database. To demonstrate replication, create a test table in both the master and slave databases:
create table synctab01 (id int primary key, note text);
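Note that Slony-I replicates data, not DDL, so the table has to be created in both databases, for example:
psql -h 192.168.10.119 -p 5432 -U slony -d master -c "create table synctab01 (id int primary key, note text);"
psql -h 192.168.10.119 -p 5433 -U slony -d slave -c "create table synctab01 (id int primary key, note text);"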
To allow the instances to access each other, add the following to the pg_hba.conf file:
host    all    all    192.168.1.0/24    md5
For the entry to take effect, reload the configuration:
pg_ctl reload
Confirm what each instance is listening on; if an instance is listening only on the local IP, modify the listening option in the postgresql.conf file:
listen_addresses = '*'
Test that the slony user can access both databases:
psql -h 192.168.10.119 -p 5432 -U slony -d master
psql -h 192.168.10.119 -p 5433 -U slony -d slave
If authentication succeeds and both databases are accessible, the PostgreSQL environment for replication is ready.
Next, use the altperl scripts to configure Slony data synchronization.
Step Two: Edit the slon_tools.conf configuration file
First, prepare a configuration file named slon_tools.conf in the /usr/local/etc directory. After Slony-I is installed, this directory contains a sample file, slon_tools.conf-sample; copy it to slon_tools.conf and then modify it.
cd /usr/local/etc
ls
cp slon_tools.conf-sample slon_tools.conf
Edit the file and change the cluster name to the one being configured, Cluster01 here:
if ($ENV{"SLONYNODES"}) {
    require $ENV{"SLONYNODES"};
} else {
    # $CLUSTER_NAME = 'replication';
    $CLUSTER_NAME = 'Cluster01';
    # $PIDFILE_DIR = '/var/run/slony1';
    $PIDFILE_DIR = '/usr/local/pgsql/log';
    # $LOGDIR = '/var/log/slony1';
    $LOGDIR = '/usr/local/pgsql/log';

    # There is currently only one master and one slave, so only two nodes need to be configured.
    add_node(node     => 1,
             host     => '192.168.10.119',
             dbname   => 'master',
             port     => 5432,
             user     => 'slony',
             password => 'slonytest');

    add_node(node     => 2,
             host     => '192.168.10.119',
             dbname   => 'slave',
             port     => 5433,
             user     => 'slony',
             password => 'slonytest');
}
# Configure the replication set; the main settings are which tables and sequences to synchronize.
$SLONY_SETS = {
    "set1_name" => {
        "set_id"       => 1,
        "table_id"     => 1,
        "sequence_id"  => 1,
        "pkeyedtables" => ["synctab01"],
        "keyedtables"  => {},
        "sequences"    => [],
    },
};
The cluster name, the log paths, and the replication set's primary-keyed tables, unique-keyed tables, sequences, and so on are now all configured. Once one node is configured, copy the finished slon_tools.conf file to the other Slony machines:
scp slon_tools.conf 192.168.10.119:`pwd`/.
Step Three: Start the synchronization service
After the Slony configuration file is configured and copied to each node in the cluster, the synchronization service needs to be initialized and started.
Initialize the cluster by executing slonik_init_cluster | slonik:
$ slonik_init_cluster | slonik
<stdin>:10: Set up replication nodes
<stdin>:13: Next: configure paths for each node/origin
<stdin>:16: Replication nodes prepared
<stdin>:17: Please start a slon replication daemon for each node
The slonik_init_cluster command reads the slon_tools.conf configuration file and automatically initializes the Slony cluster's synchronization configuration in the databases. The Slony scripts used below also read slon_tools.conf.
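If you want to inspect the slonik commands the script generates before executing them, capture the output in a file first, for example:
slonik_init_cluster > init_cluster.slonik
cat init_cluster.slonik
slonik < init_cluster.slonik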
On the master node, start the slon daemon; the number in the command corresponds to the master node's node number in the configuration file.
$ slon_start 1
Invoke slon for node 1 - /usr/local/pgsql9.5.1/bin//slon -p /var/run/slony1/cluster01_node1.pid -s 1000 -d2 Cluster01 'host=192.168.10.119 dbname=master user=slony port=5432 password=slonytest' > /usr/local/pgsql/log/node1/master-2016-11-19.log 2>&1 &
Slon failed to start for cluster Cluster01, node node1
If the start fails as above, check the log it points to, resolve the error, and start again:
$ slon_start 1
Invoke slon for node 1 - /usr/local/pgsql9.5.1/bin//slon -p /usr/local/pgsql/log/cluster01_node1.pid -s 1000 -d2 Cluster01 'host=192.168.10.119 dbname=master user=slony port=5432 password=slonytest' > /usr/local/pgsql/log/node1/master-2016-11-19.log 2>&1 &
Slon successfully started for cluster Cluster01, node node1
PID [11298]
Start the watchdog process as well...
On the slave node, start the slon daemon; the number in the command corresponds to the slave node's node number in the configuration file.
$ slon_start 2
Invoke slon for node 2 - /usr/local/pgsql9.5.1/bin//slon -p /var/run/slony1/cluster01_node2.pid -s 1000 -d2 Cluster01 'host=192.168.10.119 dbname=slave user=slony port=5433 password=slonytest' > /usr/local/pgsql/log/node2/slave-2016-11-19.log 2>&1 &
Slon failed to start for cluster Cluster01, node node2
Start again:
$ slon_start 2
Invoke slon for node 2 - /usr/local/pgsql9.5.1/bin//slon -p /usr/local/pgsql/log/cluster01_node2.pid -s 1000 -d2 Cluster01 'host=192.168.10.119 dbname=slave user=slony port=5433 password=slonytest' > /usr/local/pgsql/log/node2/slave-2016-11-19.log 2>&1 &
Slon successfully started for cluster Cluster01, node node2
PID [11323]
Start the watchdog process as well...
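You can confirm that both slon daemons are running, for example:
ps -ef | grep slon | grep -v grep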
Error handling when starting the slon daemon:
The first start failed because the PID file could not be opened; the fix was to change the PID file path ($PIDFILE_DIR in slon_tools.conf) to a writable directory:
$ cat /usr/local/pgsql/log/node1/master-2016-11-19.log
2016-11-19 11:51:41 CST CONFIG main: slon version 2.2.5 starting up
2016-11-19 11:51:41 CST FATAL  cannot open pid_file "/var/run/slony1/cluster01_node1.pid"
Start again and it will start normally.
Using the command "Slonik_create_set 1 | Slonik "Create the data set, as follows:
[Email protected] etc]$ slonik_create_set 1 | slonik
<stdin>:11:subscription Set 1 (set1_name) created
<stdin>:12:adding tables to the subscription set
<stdin>:16:add Primary keyed Table PUBLIC.SYNCTAB01
<stdin>:19:adding sequences to the subscription set
<stdin>:20:all Tables Added
[Email protected] etc]$
Using the command "Slonik_subscribe_set 1 2 | Slonik "Add data subscribers, where the first digit of the command 1 represents the synchronization set number, and the second number 2 represents the node number of the data subscriber, as follows:
[Email protected] etc]$ slonik_subscribe_set 1 2 | Slonik
<stdin>:6:subscribed nodes to set 1
At this, the synchronization is configured.
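Replication progress can be checked from Slony's sl_status view in the cluster schema (named _Cluster01 for this cluster), for example:
psql -h 192.168.10.119 -p 5432 -U slony -d master -c 'select st_origin, st_received, st_lag_num_events, st_lag_time from "_Cluster01".sl_status;'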
Step Four: Verify the synchronization
Test the synchronization effect below.
Insert a record in the master database:
postgres=# \c master
You are now connected to database "master" as user "postgres".
master=# \d
        List of relations
 Schema |   Name    | Type  |  Owner
--------+-----------+-------+----------
 public | synctab01 | table | postgres
(1 row)

master=# select * from synctab01;
 id | note
----+------
(0 rows)

master=# insert into synctab01 values (1, '111');
INSERT 0 1
master=# select * from synctab01;
 id | note
----+------
  1 | 111
(1 row)
Confirm the data on the slave database:
postgres=# \c slave
You are now connected to database "slave" as user "postgres".
slave=# \d
        List of relations
 Schema |   Name    | Type  |  Owner
--------+-----------+-------+----------
 public | synctab01 | table | postgres
(1 row)

slave=# select * from synctab01;
 id | note
----+------
(0 rows)

slave=# select * from synctab01;
 id | note
----+------
  1 | 111
(1 row)

The first query ran before the sync interval had elapsed; queried again a moment later, the inserted row has arrived.
This confirms master-slave synchronization between the two single-host PostgreSQL instances.
This article is from the "Yumushui column" blog; please keep this source: http://yumushui.blog.51cto.com/6300893/1874598