PostgreSQL Synchronization: Slony-I Configuration


From: http://bbs.chinaunix.net/thread-955564-1-1.html

Reference documents:
http://bbs.chinaunix.net/viewthr ... page%3D1#pid6997819
http://topic.csdn.net/t/20050617/12/4089010.html

PostgreSQL data synchronization
Slony-I is a replication system built on PostgreSQL's asynchronous notification mechanism, and its synchronization is very fast. Apart from the somewhat involved configuration, it is very easy to use for replication and backup.
1. Overall requirements
1.1. Current situation
As software systems grow more complex, distributed deployment has become a popular way to deploy them. Structurally, programs and data are the two major elements supporting a system. There are many good solutions for distributing programs, so here I will focus on the distributed deployment of data, which in practice means the distributed deployment of databases.
1.2. System environment
Here, I will detail the deployment process in the following environment.
Master database server (master)
RHEL3
ip 192.168.0.200
mask 255.255.255.0
Slave database server (slave)
RHEL3
ip 192.168.0.201
mask 255.255.255.0
Make sure the two machines can reach each other over the network.
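For instance, a quick connectivity check (not part of the original procedure; addresses as above):
ping -c 3 192.168.0.201    # from the master to the slave
ping -c 3 192.168.0.200    # from the slave to the master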
Please download the following packages from the locations given:
db     postgresql-8.2.4.tar.gz (http://www.postgresql.org/download/)
slony1 slony1-1.2.6.tar (http://www.postgresql.org/download/)
The URLs above are entry pages; please select the correct source packages from them.
1.3. System installation
1.3.1 Primary database server
1.3.1.1 Installing the database
groupadd postgres
useradd postgres -g postgres -d /home/postgres
Unzip, command: tar -xvzf postgresql-8.2.4.tar.gz
Enter the postgresql-8.2.4 directory, command: cd postgresql-8.2.4
./configure --prefix=/usr/local/pgsql --localstatedir=/home/postgres/data
Build, command: gmake
Note that it is gmake
Install, command: gmake install
Add environment parameters for the postgres user as follows:
vi /home/postgres/.bash_profile
PGLIB=/usr/local/pgsql/lib
PGDATA=/test/spescso/data
PATH=$PATH:/usr/local/pgsql/bin
MANPATH=$MANPATH:/usr/local/pgsql/man
export PGLIB PGDATA PATH MANPATH
chown -R postgres:postgres /usr/local/pgsql
su - postgres
mkdir data
Create the database cluster as the postgres user:
/usr/local/pgsql/bin/initdb -E UTF-8 /home/postgres/data/
/usr/local/pgsql/bin/createuser -a -d ssuser
/usr/local/pgsql/bin/createlang plpgsql template1
Create the log directory as the postgres user:
mkdir /test/spescso/data/log
Modify the /test/spescso/data/postgresql.conf file; the main logging settings are:
log_destination = 'stderr'
redirect_stderr = true
log_directory = '/test/spescso/data/log/'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
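This walkthrough relies on the -i flag when starting postmaster later on; as an alternative (my addition, not shown in the original), TCP/IP connections can be enabled directly in postgresql.conf:
listen_addresses = '*'    # accept TCP/IP connections on all interfaces, equivalent to the -i flag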
Modify /test/spescso/data/pg_hba.conf, mainly so that the two machines can authenticate to each other; otherwise they cannot access each other:
# TYPE  DATABASE  USER  CIDR-ADDRESS      METHOD
# "local" is for Unix domain socket connections only
local   all       all                     trust
# IPv4 local connections:
host    all       all   127.0.0.1/32      trust
host    all       all   192.168.0.0/24    trust
# IPv6 local connections:
host    all       all   ::1/128           trust
If the above is unclear, please read the PostgreSQL client authentication documentation carefully.
Start the postmaster master database process in the background as the postgres user:
/usr/local/pgsql/bin/postmaster -i -D /test/spescso/data/ -p 5432 &
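To confirm that the server is up and accepting TCP/IP connections, a quick check (not part of the original procedure; host, port, and user as configured above) is to list the databases remotely:
/usr/local/pgsql/bin/psql -h 192.168.0.200 -p 5432 -U ssuser -l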
1.3.1.2 Install slony1 data synchronization tool
Unzip, command: tar -xvjf slony1-1.2.6.tar
Note the -j parameter; the package name on the website is slightly inconsistent with its actual compression.
Enter the slony1-1.2.6 directory, command: cd slony1-1.2.6
./configure --with-pgsourcetree=<postgresql source code directory>
Build, command: gmake
Note that it is gmake
Install, command: gmake install
1.3.2 Slave database server
The installation method is the same as for the main database server.
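One detail the original leaves implicit: in this walkthrough the slave instance listens on port 5431, so after repeating the installation you would initialize and start its cluster with that port. A sketch, assuming the slave reuses /home/postgres/data as its data directory:
su - postgres
/usr/local/pgsql/bin/initdb -E UTF-8 /home/postgres/data/
# edit pg_hba.conf and postgresql.conf as on the master, then start on port 5431:
/usr/local/pgsql/bin/postmaster -i -D /home/postgres/data/ -p 5431 &
/usr/local/pgsql/bin/createuser -a -d ssuser -p 5431
/usr/local/pgsql/bin/createlang plpgsql template1 -p 5431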
1.3.3 Creating the databases and tables
The following example creates the main database test and its data tables on the main database server; other databases and tables can be set up the same way.
su -c "/usr/local/pgsql/bin/createdb -U ssuser -E UTF-8 test -p 5432" -l postgres
su -c "/usr/local/pgsql/bin/psql -f /home/hzh/share/sql.txt -p 5432 -d test -U ssuser" -l postgres
(Note that sql.txt contains the commands that create the data tables; write it yourself. The sql.txt file should preferably be saved as UTF-8, especially when it contains Chinese characters.)
Example: sql.txt
CREATE TABLE tb_depart (
    id int primary key,
    name char(32)
);
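The other two tables referenced below, tb_user and tb_manager, are not defined in the original; hypothetical minimal definitions might look like this (Slony-I needs a primary key or unique key on every replicated table):
CREATE TABLE tb_user (
    id int primary key,
    name char(32)
);
CREATE TABLE tb_manager (
    id int primary key,
    name char(32)
);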
Create testslave1 and testslave2 in turn on the main database machine.
Create testslave3 on port 5431 of the slave database machine.
Make sure the same tables tb_depart / tb_manager / tb_user are created in testslave1 / testslave2 / testslave3 as in test, as sketched below.
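For example (a sketch, run as the postgres user; it reuses the sql.txt path from above, so adjust it to your environment):
/usr/local/pgsql/bin/createdb -U ssuser -E UTF-8 testslave1 -p 5432
/usr/local/pgsql/bin/psql -f /home/hzh/share/sql.txt -p 5432 -d testslave1 -U ssuser
/usr/local/pgsql/bin/createdb -U ssuser -E UTF-8 testslave2 -p 5432
/usr/local/pgsql/bin/psql -f /home/hzh/share/sql.txt -p 5432 -d testslave2 -U ssuser
# on the slave machine (port 5431):
/usr/local/pgsql/bin/createdb -U ssuser -E UTF-8 testslave3 -p 5431
/usr/local/pgsql/bin/psql -f /home/hzh/share/sql.txt -p 5431 -d testslave3 -U ssuser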
1.4 Configuring synchronization
1.4.1. Master configuration
Write the configmaster shell script file and make it executable, command: chmod a+rwx configmaster. The file content is as follows:
#!/bin/bash
basebin=/usr/local/pgsql/bin
# Cluster name
CLUSTER=slony_test1
# Names of the databases involved in synchronization; the master db is test, the other three are slaves
DBSERVER=test
DBSLAVE1=testslave1
DBSLAVE2=testslave2
DBSLAVE3=testslave3
# Addresses of the machines participating in synchronization
HOSTSERVER=192.168.0.200
HOSTSLAVE1=192.168.0.200
HOSTSLAVE2=192.168.0.200
HOSTSLAVE3=192.168.0.201
# Database user names participating in synchronization
DBSERVER_USER=ssuser
DBSLAVE1_USER=ssuser
DBSLAVE2_USER=ssuser
DBSLAVE3_USER=ssuser
# Synchronization setup; what follows are parameters for the slonik command
$basebin/slonik <<_EOF_
cluster name = $CLUSTER;
# Define the replication nodes
node 1 admin conninfo = 'dbname=$DBSERVER host=$HOSTSERVER user=$DBSERVER_USER port=5432';
node 2 admin conninfo = 'dbname=$DBSLAVE1 host=$HOSTSLAVE1 user=$DBSLAVE1_USER port=5432';
node 3 admin conninfo = 'dbname=$DBSLAVE2 host=$HOSTSLAVE2 user=$DBSLAVE2_USER port=5432';
node 4 admin conninfo = 'dbname=$DBSLAVE3 host=$HOSTSLAVE3 user=$DBSLAVE3_USER port=5431';
# Initialize the cluster; ids start at 1
init cluster (id = 1, comment = 'Node 1');
# Set the data tables that participate in synchronization:
# first create a replication set (its id also starts at 1),
# then add the tables to the set, one "set add table" command per table to be replicated.
# The table id starts at 1 and increments by 1.
# "fully qualified name" is the full name of the table: schema name.table name.
# The set id here must match the id of the replication set created above.
create set (id = 1, origin = 1, comment = 'All test tables');
set add table (set id = 1, origin = 1, id = 1, fully qualified name = 'public.tb_depart', comment = 'Table tb_depart');
set add table (set id = 1, origin = 1, id = 2, fully qualified name = 'public.tb_user', comment = 'Table tb_user');
set add table (set id = 1, origin = 1, id = 3, fully qualified name = 'public.tb_manager', comment = 'Table tb_manager');
# If a table has no primary key but does have a unique key, the key keyword
# can specify it as the replication key, as in the following example:
# set add table (set id = 1, origin = 1, id = 4, fully qualified name = 'public.history', key = "column", comment = 'Table history');
# Tables without any unique column need extra handling; this statement goes before create set:
# table add key (node id = 1, fully qualified name = 'public.history');
# and the table is then added to the set like this:
# set add table (set id = 1, origin = 1, id = 4, fully qualified name = 'public.history', comment = 'history table', key = serial);
# Store the nodes
store node (id = 2, comment = 'Node 2');
store node (id = 3, comment = 'Node 3');
store node (id = 4, comment = 'Node 4');
# Store the paths
store path (server = 1, client = 2, conninfo = 'dbname=$DBSERVER host=$HOSTSERVER user=$DBSERVER_USER port=5432');
store path (server = 2, client = 1, conninfo = 'dbname=$DBSLAVE1 host=$HOSTSLAVE1 user=$DBSLAVE1_USER port=5432');
store path (server = 1, client = 3, conninfo = 'dbname=$DBSERVER host=$HOSTSERVER user=$DBSERVER_USER port=5432');
store path (server = 3, client = 1, conninfo = 'dbname=$DBSLAVE2 host=$HOSTSLAVE2 user=$DBSLAVE2_USER port=5432');
store path (server = 1, client = 4, conninfo = 'dbname=$DBSERVER host=$HOSTSERVER user=$DBSERVER_USER port=5432');
store path (server = 4, client = 1, conninfo = 'dbname=$DBSLAVE3 host=$HOSTSLAVE3 user=$DBSLAVE3_USER port=5431');
# Set the listen events and subscription direction; in this replication setup the master node
# is the original provider and the slave nodes are receivers
store listen (origin = 1, provider = 1, receiver = 2);
store listen (origin = 2, provider = 2, receiver = 1);
store listen (origin = 1, provider = 1, receiver = 3);
store listen (origin = 3, provider = 3, receiver = 1);
store listen (origin = 1, provider = 1, receiver = 4);
store listen (origin = 4, provider = 4, receiver = 1);
_EOF_
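After running the script, one way to check that the Slony-I catalog and nodes were created is to query the cluster schema, which Slony-I names after the cluster (here _slony_test1); this check is not in the original:
/usr/local/pgsql/bin/psql -d test -U ssuser -p 5432 -c "select * from _slony_test1.sl_node;"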
1.4.2. Subscribing the replication set
Write a commitdata shell script file and grant it executable permissions, as follows:
#!/bin/bash
basebin=/usr/local/pgsql/bin
CLUSTER=slony_test1
DBSERVER=test
DBSLAVE1=testslave1
DBSLAVE2=testslave2
DBSLAVE3=testslave3
HOSTSERVER=192.168.0.200
HOSTSLAVE1=192.168.0.200
HOSTSLAVE2=192.168.0.200
HOSTSLAVE3=192.168.0.201
DBSERVER_USER=ssuser
DBSLAVE1_USER=ssuser
DBSLAVE2_USER=ssuser
DBSLAVE3_USER=ssuser
$basebin/slonik <<_EOF_
cluster name = $CLUSTER;
# Connection parameters
node 1 admin conninfo = 'dbname=$DBSERVER host=$HOSTSERVER user=$DBSERVER_USER port=5432';
node 2 admin conninfo = 'dbname=$DBSLAVE1 host=$HOSTSLAVE1 user=$DBSLAVE1_USER port=5432';
node 3 admin conninfo = 'dbname=$DBSLAVE2 host=$HOSTSLAVE2 user=$DBSLAVE2_USER port=5432';
node 4 admin conninfo = 'dbname=$DBSLAVE3 host=$HOSTSLAVE3 user=$DBSLAVE3_USER port=5431';
# Subscribe the replication set
subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
subscribe set (id = 1, provider = 1, receiver = 3, forward = no);
subscribe set (id = 1, provider = 1, receiver = 4, forward = no);
_EOF_
1.4.3. Synchronization procedure
Execute the configuration command on the master: ./configmaster
Start a slon background process on the master for the main database replication, command:
/usr/local/pgsql/bin/slon slony_test1 "dbname=test host=192.168.0.200 user=ssuser port=5432" &
Start a slon background process on the master for the first slave database, command:
/usr/local/pgsql/bin/slon slony_test1 "dbname=testslave1 host=192.168.0.200 user=ssuser port=5432" &
Start a slon background process on the master for the second slave database, command:
/usr/local/pgsql/bin/slon slony_test1 "dbname=testslave2 host=192.168.0.200 user=ssuser port=5432" &
Start a slon background process on the slave for the third slave database, command:
/usr/local/pgsql/bin/slon slony_test1 "dbname=testslave3 host=192.168.0.201 user=ssuser port=5431" &
Execute the subscription command on the master: ./commitdata
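A simple end-to-end check (a sketch; the table comes from the example above and the values are arbitrary) is to insert a row on the master and query it on a subscriber after a few seconds:
/usr/local/pgsql/bin/psql -h 192.168.0.200 -p 5432 -d test -U ssuser -c "insert into tb_depart values (1, 'dev');"
# after a short delay, the row should appear on every subscriber, e.g. node 4 on the slave machine:
/usr/local/pgsql/bin/psql -h 192.168.0.201 -p 5431 -d testslave3 -U ssuser -c "select * from tb_depart;"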
