PL/Proxy and PostgreSQL
With PL/Proxy, the structure of a PostgreSQL cluster can be expressed clearly. If you are not familiar with PL/Proxy and PostgreSQL clusters, see the article "Skype Plans for PostgreSQL to Scale to 1 Billion Users".
The following operations are performed on three different machines: the PL/Proxy node is named p1, and the database nodes are named d1 and d2. The hardware configuration is listed below; the software used is postgresql-8.3.0 and plproxy-2.0.4, and the installation of pgbouncer is omitted.
PL/Proxy node:
Hostname: p1
inet addr: 10.0.0.1
OS: Linux 2.6.9-42.ELsmp
CPU: Intel(R) Xeon(R) CPU L5320 @ 1.86GHz
MemTotal: 514440 kB
Node1:
Hostname: d1
inet addr: 10.0.0.2
OS: Linux 2.6.9-42.ELsmp
CPU: Intel(R) Xeon(R) CPU L5320 @ 1.86GHz
MemTotal: 254772 kB
Node2:
Hostname: d2
inet addr: 10.0.0.3
OS: Linux 2.6.9-42.ELsmp
CPU: Intel(R) Xeon(R) CPU L5320 @ 1.86GHz
MemTotal: 254772 kB
1. Install postgresql-8.3.0 on p1, d1, and d2, and create the urtcluster database
# Compile and install
gunzip postgresql-8.3.0.tar.gz
tar xf postgresql-8.3.0.tar
cd postgresql-8.3.0
./configure --prefix=/home/y/pgsql --with-perl
gmake
gmake check
sudo gmake install
# Add a unix user
sudo adduser postgres
sudo mkdir /home/y/pgsql/data
sudo chown postgres /home/y/pgsql/data
# Init the db and start the service
sudo -u postgres /home/y/pgsql/bin/initdb -D /home/y/pgsql/data
sudo -u postgres /home/y/pgsql/bin/postgres -D /home/y/pgsql/data > logfile 2>&1 &
# Create the db over a local connection
sudo -u postgres /home/y/pgsql/bin/createdb urtcluster
# Check that the database has been created
sudo -u postgres /home/y/pgsql/bin/psql -d urtcluster
# d1 and d2 must allow access from p1
# Edit postgresql.conf to open the TCP port
sudo vim /home/y/pgsql/data/postgresql.conf
listen_addresses = '*'
port = 5432
# Add authentication for the ipvs user
sudo vim /home/y/pgsql/data/pg_hba.conf
host urtcluster ipvs 10.0.0.0/16 trust
# Restart the server
sudo -u postgres /home/y/pgsql/bin/pg_ctl -D /home/y/pgsql/data stop
sudo -u postgres /home/y/pgsql/bin/postgres -D /home/y/pgsql/data > logfile 2>&1 &
sudo -u postgres /home/y/pgsql/bin/pg_ctl -D /home/y/pgsql/data reload
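With listen_addresses and pg_hba.conf in place, it is worth verifying from p1 that both nodes accept remote connections. A quick check might look like the following; it assumes the ipvs role named in the pg_hba.conf line above already exists on the nodes (create it with createuser if it does not):
# On p1: confirm d1 and d2 accept remote connections
/home/y/pgsql/bin/psql -h 10.0.0.2 -U ipvs -d urtcluster -c "select 1;"
/home/y/pgsql/bin/psql -h 10.0.0.3 -U ipvs -d urtcluster -c "select 1;"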
2. Install plproxy-2.0.4 on p1
# Check whether /home/y/pgsql/bin is in the $PATH variable; if not, add it to PATH in your ~/.bash_profile
echo $PATH
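If the directory is missing, one way to add it (assuming a bash login shell) is:
# Append to ~/.bash_profile, then log in again or source the file
export PATH=$PATH:/home/y/pgsql/bin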
gunzip plproxy-2.0.4.tar.gz
tar xf plproxy-2.0.4.tar
cd plproxy-2.0.4
gmake
sudo gmake install
# Load the plproxy language into the urtcluster database
sudo -u postgres /home/y/pgsql/bin/psql -f /home/y/pgsql/share/contrib/plproxy.sql urtcluster
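To confirm the handler was installed, you can list the languages registered in urtcluster; plproxy should appear after this step (and plpgsql after step 3):
sudo -u postgres /home/y/pgsql/bin/psql -d urtcluster -c "select lanname from pg_language;"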
3. Install plpgsql on p1, d1, and d2
sudo -u postgres /home/y/pgsql/bin/createlang plpgsql urtcluster
4. Create a schema on p1
sudo -u postgres /home/y/pgsql/bin/psql -d urtcluster
urtcluster=# create schema plproxy;
5. Initialize settings on p1
# PL/Proxy configuration is implemented through three functions. Standard templates for them follow.
# This function lets PL/Proxy find the partitions of the named cluster
create or replace function plproxy.get_cluster_partitions(cluster_name text)
returns setof text as $$
begin
    if cluster_name = 'urtcluster' then
        return next 'dbname=urtcluster host=10.0.0.2';
        return next 'dbname=urtcluster host=10.0.0.3';
        return;
    end if;
    raise exception 'unknown cluster';
end;
$$ language plpgsql;
# PL/Proxy uses this function to decide whether its cached cluster configuration is still valid;
# bump the version number whenever the partition list changes
create or replace function plproxy.get_cluster_version(cluster_name text)
returns int4 as $$
begin
    if cluster_name = 'urtcluster' then
        return 1;
    end if;
    raise exception 'unknown cluster';
end;
$$ language plpgsql;
# This function supplies per-cluster configuration settings as key/value pairs
create or replace function plproxy.get_cluster_config(cluster_name text, out key text, out val text)
returns setof record as $$
begin
    key := 'statement_timeout';
    val := '60';
    return next;
    return;
end;
$$ language plpgsql;
# Put the three functions in a urtclusterinit.sql file and execute
sudo -u postgres /home/y/pgsql/bin/psql -f urtclusterinit.sql -d urtcluster -h 10.0.0.1
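Before going further, the three functions can be exercised directly from psql. With the configuration above, the first call should return the two partition connection strings, the second should return 1, and the third the statement_timeout setting:
select * from plproxy.get_cluster_partitions('urtcluster');
select plproxy.get_cluster_version('urtcluster');
select * from plproxy.get_cluster_config('urtcluster');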
6. Set up the d1 and d2 nodes
# Create the users table on each database node
create table users (
    username text,
    email text
);
# Create an insert function on each database node
create or replace function insert_user(i_username text, i_emailaddress text)
returns integer as $$
    insert into users (username, email) values ($1, $2);
    select 1;
$$ language sql;
# Save the function in the urtclusternodesinit_1.sql file and execute
sudo -u postgres /home/y/pgsql/bin/psql -f urtclusternodesinit_1.sql -h 10.0.0.2 -d urtcluster
sudo -u postgres /home/y/pgsql/bin/psql -f urtclusternodesinit_1.sql -h 10.0.0.3 -d urtcluster
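Optionally, the node-level function can be smoke-tested against one partition before wiring up the proxy. The 'probe' row here is a hypothetical test value, removed right after:
# On p1, talk to d1 directly
/home/y/pgsql/bin/psql -h 10.0.0.2 -U ipvs -d urtcluster -c "select insert_user('probe', 'probe@somewhere.com');"
/home/y/pgsql/bin/psql -h 10.0.0.2 -U ipvs -d urtcluster -c "delete from users where username = 'probe';"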
7. Set up the p1 node
# Create an insert function with the same name on the plproxy node; it routes each call to a partition
create or replace function insert_user(i_username text, i_emailaddress text)
returns integer as $$
    cluster 'urtcluster';
    run on hashtext(i_username);
$$ language plproxy;
# Create a query function on the plproxy node; it also routes by the hash of the username
create or replace function get_user_email(i_username text)
returns text as $$
    cluster 'urtcluster';
    run on hashtext(i_username);
    select email from users where username = i_username;
$$ language plproxy;
# Save the functions in the urtclusterproxyexec.sql file and execute
sudo -u postgres /home/y/pgsql/bin/psql -f urtclusterproxyexec.sql -h 10.0.0.1 -d urtcluster
8. Test the results on p1
sudo -u postgres /home/y/pgsql/bin/psql -d urtcluster
select insert_user('sven', 'sven@somewhere.com');
# Saved to d2; verify with select hashtext('sven') & 1, which hashes to partition 1
select insert_user('marko', 'marko@somewhere.com');
# Saved to d2; verify with select hashtext('marko') & 1, which hashes to partition 1
select insert_user('steve', 'steve@somewhere.com');
# Saved to d1; verify with select hashtext('steve') & 1, which hashes to partition 0
select get_user_email('sven');
select get_user_email('marko');
select get_user_email('steve');
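To double-check the routing, query the users table on each node directly; each row should appear on exactly one partition (sven and marko on d2, steve on d1, per the hash results above):
/home/y/pgsql/bin/psql -h 10.0.0.2 -U ipvs -d urtcluster -c "select * from users;"   # d1: partition 0
/home/y/pgsql/bin/psql -h 10.0.0.3 -U ipvs -d urtcluster -c "select * from users;"   # d2: partition 1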
Installation and configuration of PostgreSQL and Slony-I
After MySQL was acquired by Sun, many web developers and architects began to pay attention to PostgreSQL. Indeed, PostgreSQL together with Slony-I, PL/Proxy, and PgBouncer can give us a complete enterprise-level database storage solution.
[Figure: web architecture diagram.]
The installation and configuration of PostgreSQL and PL/Proxy were covered in the previous post; below, Slony-I is used to synchronize the primary and secondary databases. The installation and configuration of PgBouncer will be introduced later.
1. Primary and secondary database machines
MASTER:
Hostname: m_db
inet addr: 10.0.0.11
OS: Linux 2.6.9-42.ELsmp
CPU: Intel(R) Xeon(R) CPU L5320 @ 1.86GHz
MemTotal: 254772 kB
pgsql: postgresql-8.3.0
SLAVE:
Hostname: s_db
inet addr: 10.0.0.12
OS: Linux 2.6.9-42.ELsmp
CPU: Intel(R) Xeon(R) CPU L5320 @ 1.86GHz
MemTotal: 514440 kB
pgsql: postgresql-8.3.0
# Install postgresql-8.3.0 on m_db and s_db; for installation and configuration see the previous post.
# Make sure the superuser is ipvs and the database name is URT.
# Check that the superuser ipvs on m_db and s_db can access the other machine.
# Execute on m_db and s_db respectively:
sudo -u postgres /home/y/pgsql/bin/createlang plpgsql URT
# Create the same accounts table in the URT database on m_db and s_db; a sketch follows.
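The article leaves the accounts definition open; below is a minimal pgbench-style sketch. Note that Slony-I can only replicate tables that have a primary key (or a unique candidate key), so one is included. The column names are assumptions, not taken from the original:
create table accounts (
    aid      integer primary key,  -- account ID (assumed column)
    abalance integer,              -- balance (assumed column)
    filler   char(84)              -- padding, as in pgbench (assumed)
);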
2. Install Slony-I
# Install Slony-I on m_db and s_db respectively
tar xfj slony1-1.2.13.tar.bz2
cd slony1-1.2.13
./configure --with-pgconfigdir=/home/y/pgsql/bin
gmake all
sudo gmake install
3. Slony config
Create the urt_replica_init.sh file:
##############################
#!/bin/sh
slonik=/home/y/pgsql/bin/slonik   # location of the slonik executable
cluster=URT                       # name of your cluster
set_id=1                          # ID of your replica set
master=1                          # master server ID
host1=m_db                        # source database IP address or host name
dbname1=URT                       # source database to be replicated
slony_user=postgres               # superuser of the source database
slave=2                           # slave server ID
host2=s_db                        # target database IP address or host name
dbname2=URT                       # target database to be replicated to
pgbench_user=postgres             # target database user

$slonik <<_EOF_
# Define the cluster name
cluster name = $cluster;
# Define the replication nodes
node $master admin conninfo = 'dbname=$dbname1 host=$host1 user=$slony_user';
node $slave admin conninfo = 'dbname=$dbname2 host=$host2 user=$pgbench_user';
# Initialize the cluster and the master node; IDs start at 1, and with a single cluster this must be 1.
# You can put your own text in the comment field.
init cluster (id = $master, comment = 'primary node');
# Below is the slave node
store node (id = $slave, comment = 'slave node');
# Configure the connection information between master and slave, i.e. tell each node how to reach the other.
# Connection parameters of the master node:
store path (server = $master, client = $slave,
    conninfo = 'dbname=$dbname1 host=$host1 user=$slony_user');
# Connection parameters of the slave node:
store path (server = $slave, client = $master,
    conninfo = 'dbname=$dbname2 host=$host2 user=$pgbench_user');
# Set the roles in replication: the master is the original provider and the slave is the receiver
store listen (origin = $master, provider = 1, receiver = 2);
store listen (origin = $slave, provider = 2, receiver = 1);
# Create a replica set; its ID also starts at 1
create set (id = $set_id, origin = $master, comment = 'all pgbench tables');
# Add tables to the replica set, one "set add table" command per table to be replicated;
# the table IDs start at 1 and increase by 1.
# "fully qualified name" is the table's full name: schema_name.table_name
# The set ID here must match the replica set ID above
set add table (set id = $set_id, origin = $master,
    id = 1, fully qualified name = 'public.accounts',
    comment = 'table accounts');
_EOF_
########################
# Run the script on m_db or s_db:
./urt_replica_init.sh
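If the script succeeds, slonik will have created the cluster schema _URT in both databases (the schema name follows from cluster = URT above). As a sanity check, the registered nodes can be listed with:
/home/y/pgsql/bin/psql -h m_db -d URT -U postgres -c 'select no_id, no_comment from "_URT".sl_node;'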
4. Slony start
Create the master.slon file:
########################
cluster_name = "URT"
conn_info = "dbname=URT host=m_db user=postgres"
########################
Create the slave.slon file:
########################
cluster_name = "URT"
conn_info = "dbname=URT host=s_db user=postgres"
########################
# Execute on m_db
/home/y/pgsql/bin/slon -f master.slon > master.log &
# Execute on s_db
/home/y/pgsql/bin/slon -f slave.slon > slave.log &
5. Slony subscribe
Create the urt_replica_subscribe.sh file:
########################
#!/bin/sh
slonik=/home/y/pgsql/bin/slonik   # location of the slonik executable
cluster=URT                       # name of your cluster
set_id=1                          # ID of your replica set
master=1                          # master server ID
host1=m_db                        # source database IP address or host name
dbname1=URT                       # source database to be replicated
slony_user=postgres               # superuser of the source database
slave=2                           # slave server ID
host2=s_db                        # target database IP address or host name
dbname2=URT                       # target database to be replicated to
pgbench_user=postgres             # target database user

$slonik <<_EOF_
# Define the cluster name
cluster name = $cluster;
# Define the replication nodes
node $master admin conninfo = 'dbname=$dbname1 host=$host1 user=$slony_user';
node $slave admin conninfo = 'dbname=$dbname2 host=$host2 user=$pgbench_user';
# Subscribe the replica set
subscribe set (id = $set_id, provider = $master, receiver = $slave, forward = no);
_EOF_
########################
# Run the script on m_db or s_db:
./urt_replica_subscribe.sh
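Replication status can then be monitored via the sl_status view that Slony-I creates in the cluster schema; lag values near zero indicate the slave is keeping up. A sketch, assuming the names used above:
# On m_db
/home/y/pgsql/bin/psql -d URT -U postgres -c 'select st_origin, st_received, st_lag_num_events, st_lag_time from "_URT".sl_status;'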
6. Test
Modify the accounts table in the URT database on m_db; the accounts table on s_db changes accordingly.
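A concrete check, assuming the pgbench-style accounts sketch from step 1:
# On m_db: write a row on the master
/home/y/pgsql/bin/psql -d URT -U postgres -c "insert into accounts (aid, abalance) values (1, 100);"
# On s_db: after a few seconds the row should have been replicated
/home/y/pgsql/bin/psql -d URT -U postgres -c "select aid, abalance from accounts;"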
[Figure: internal architecture of Slony-I.] As the figure shows, Slony-I uses many threads to transfer and distribute events and data.