POSTGRES-XL Cluster construction and testing


CentOS 6.5   192.168.0.101
CentOS 6.5   192.168.0.102
CentOS 6.5   192.168.0.103
CentOS 6.5   192.168.0.104

I. Host planning
cnode1 (GTM)
cnode2 (gtm_proxy, coordinator, datanode)
cnode3 (gtm_proxy, coordinator, datanode)
cnode4 (gtm_proxy, coordinator, datanode)

II. Configure /etc/hosts on each node
# vim /etc/hosts
192.168.0.101 cnode1
192.168.0.102 cnode2
192.168.0.103 cnode3
192.168.0.104 cnode4


III. Install Postgres-XL

1. Install the dependency packages on each node

yum -y install flex
yum -y install bison
yum -y install openjade.x86_64
yum -y install jadetex.noarch
yum -y install docbook*



2. Install Postgres-XL on each node
# tar -zxvf postgres-xl-v9.2-src.tar.gz
# cd postgres-xl
# ./configure --prefix=/usr/local/pgxl-9.2
# make
# make install
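A quick way to confirm the binaries landed where the later steps expect them (a minimal check, assuming the /usr/local/pgxl-9.2 prefix used above):

# /usr/local/pgxl-9.2/bin/postgres --version
# ls /usr/local/pgxl-9.2/bin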


3. Install the cluster management tool pgxc_ctl on cnode1 (GTM)
Unpack the source code on the GTM host (or any other machine):

# tar -zxvf postgres-xl-v9.2-src.tar.gz
# cd /opt/pgxl/postgres-xl/contrib/pgxc_ctl/
# make
# make install

This installs pgxc_ctl; the subsequent cluster initialization, startup, and shutdown can all be done with this command.
The /home/postgres/pgxc_ctl directory is generated by default, and the configuration file pgxc_ctl.conf in that directory is read by default whenever the pgxc_ctl command is executed.
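pgxc_ctl can also be pointed at an explicit configuration file instead of the default one; a small sketch of the general form (the -c option and the monitor all command come from the pgxc_ctl documentation, and are only useful once the cluster has been initialized and started in the later sections):

$ pgxc_ctl -c /home/postgres/pgxc_ctl/pgxc_ctl.conf monitor all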


IV. Configure the cluster

1. Create a postgres user on each node

Create the postgres user on every host:
# useradd postgres
# passwd postgres
# (enter the password 12345678)

2. Set environment variables on each node
# su - postgres
$ vi .bashrc    # Note: the variables must go into .bashrc, otherwise the commands will not be found

export PGHOME=/usr/local/pgxl-9.2
export PGUSER=postgres
export PATH=$PGHOME/bin:$PATH
export LD_LIBRARY_PATH=$PGHOME/lib

3. Grant directory permissions on each node
Directories are created under $PGHOME on the datanodes during installation, so the postgres user needs write access to $PGHOME.
Execute the following command on each datanode and coordinator host:
# chown -R postgres:postgres /usr/local/pgxl-9.2


V. Configure password-free SSH between the nodes (pgxc_ctl needs to connect to the other nodes when initializing the cluster)
Run as the postgres user:
$ ssh-keygen -t rsa            # keys are created under ~/.ssh
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The contents of this file also need to be added to the authorized_keys file on every other node in the cluster:
$ scp ~/.ssh/authorized_keys postgres@cnode2:/home/postgres/.ssh/
$ scp ~/.ssh/authorized_keys postgres@cnode3:/home/postgres/.ssh/
$ scp ~/.ssh/authorized_keys postgres@cnode4:/home/postgres/.ssh/


This way the postgres user on cnode1 can log in to cnode2 through cnode4 without a password.

Fix the permissions on the .ssh directory and authorized_keys:
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys

Test the password-free connection (a password is required the first time):
$ scp test_scp.txt postgres@cnode2:/home/postgres/
$ scp test_scp.txt postgres@cnode3:/home/postgres/
$ scp test_scp.txt postgres@cnode4:/home/postgres/


The installation here is driven from the GTM host, so the commands above are executed on the GTM machine and the generated
id_rsa.pub is appended to ~/.ssh/authorized_keys on the other datanode and coordinator hosts.
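A quick sanity check that key-based login works from the GTM host to every other node (a small sketch using the hostnames from /etc/hosts above; each command should print the remote hostname without prompting for a password):

$ for h in cnode2 cnode3 cnode4; do ssh postgres@$h hostname; done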

VI. Configure and initialize the cluster using pgxc_ctl

First:

Create the following directory on each coordinator and datanode host and grant ownership:
# mkdir -p /data/pg/pg92data
# chown -R postgres:postgres /data/pg/pg92data

Second:
# Create the cluster configuration file pgxc_ctl.conf on the cnode1 (GTM) node
# Create the pgxc_ctl.conf file in the /home/postgres/pgxc_ctl directory
# The specific meaning of each node's configuration entries is described at: http://files.postgres-xl.org/documentation/pgxc_ctl.html

# pgxc_ctl.conf contents:
===========================

# user and path
pgxcOwner=postgres
pgxcUser=$pgxcOwner
pgxcInstallDir=/usr/local/pgxl-9.2

# gtm
gtmMasterDir=/usr/local/pgxl-9.2/gtm
gtmMasterPort=6666
gtmMasterServer=cnode1
gtmSlave=n

# gtm proxy
gtmProxy=y
gtmProxyDir=/data/pg/pg92data
gtmProxyNames=(gtm_pxy1 gtm_pxy2 gtm_pxy3)
gtmProxyServers=(cnode2 cnode3 cnode4)
gtmProxyPorts=(20001 20001 20001)
gtmProxyDirs=($gtmProxyDir/gtm_pxy1 $gtmProxyDir/gtm_pxy2 $gtmProxyDir/gtm_pxy3)
gtmPxyExtraConfig=none
gtmPxySpecificExtraConfig=(none none none)

# coordinator
coordMasterDir=/data/pg/pg92data
coordNames=(coord1 coord2 coord3)
coordPorts=(5432 5432 5432)
poolerPorts=(20010 20010 20010)
coordPgHbaEntries=(172.17.210.0/32)
coordMasterServers=(cnode2 cnode3 cnode4)
coordMasterDirs=($coordMasterDir/coord1 $coordMasterDir/coord2 $coordMasterDir/coord3)
coordMaxWALsernder=0
coordMaxWALSenders=($coordMaxWALsernder $coordMaxWALsernder $coordMaxWALsernder)
coordSlave=n
coordSpecificExtraConfig=(none none none)
coordSpecificExtraPgHba=(none none none)

# datanode
datanodeNames=(datanode1 datanode2 datanode3)
datanodePorts=(15432 15432 15432)
datanodePoolerPorts=(20012 20012 20012)
datanodePgHbaEntries=(172.17.210.0/32)
datanodeMasterServers=(cnode2 cnode3 cnode4)
datanodeMasterDir=/data/pg/pg92data
datanodeMasterDirs=($datanodeMasterDir/datanode1 $datanodeMasterDir/datanode2 $datanodeMasterDir/datanode3)
datanodeMaxWalSender=0
datanodeMaxWALSenders=($datanodeMaxWalSender $datanodeMaxWalSender $datanodeMaxWalSender)
datanodeSlave=n
primaryDatanode=datanode1



VII. Initialize the cluster with pgxc_ctl
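The initialization command itself is not spelled out here; with pgxc_ctl installed and pgxc_ctl.conf in place, it is normally the following (a hedged sketch run as postgres on cnode1; init all sets up the GTM, GTM proxies, coordinators, and datanodes defined in the configuration file):

$ pgxc_ctl init all
or, with an explicit configuration file:
$ pgxc_ctl -c /home/postgres/pgxc_ctl/pgxc_ctl.conf init all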


VIII. Configure the pg_hba.conf file on each coordinator and datanode

Append entries for password-free (trust) access within the cluster subnet and md5 authentication for everything else:
host    all    all    192.168.0.0/24    trust
host    all    all    0.0.0.0/0         md5
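pg_hba.conf changes take effect after a reload; one way to apply them, assuming the data directories from the pgxc_ctl.conf above (run as postgres on the host that owns each instance):

$ pg_ctl reload -D /data/pg/pg92data/coord1       # coordinator on cnode2
$ pg_ctl reload -D /data/pg/pg92data/datanode1    # datanode on cnode2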

IX. Start the cluster

Note:
The commands both to start and to stop the cluster are shown in the sketch below.
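A hedged sketch of both commands, run as postgres on cnode1 with the default pgxc_ctl.conf in place (start all and stop all are standard pgxc_ctl commands):

$ pgxc_ctl start all     # start the GTM, GTM proxies, coordinators, and datanodes
$ pgxc_ctl stop all      # stop the whole cluster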

X. Register the nodes (both coordinators and datanodes need this configuration)

Register the nodes on each node (except the GTM):
psql -p 5432 postgres      # connect to the coordinator node
psql -p 15432 postgres     # connect to the datanode

drop node coord1;
drop node coord2;
drop node coord3;
drop node datanode1;
drop node datanode2;
drop node datanode3;
create node coord1 with (type=coordinator, host='cnode2', port=5432);
create node coord2 with (type=coordinator, host='cnode3', port=5432);
create node coord3 with (type=coordinator, host='cnode4', port=5432);
create node datanode1 with (type=datanode, host='cnode2', port=15432, primary=true);
create node datanode2 with (type=datanode, host='cnode3', port=15432, primary=false);
create node datanode3 with (type=datanode, host='cnode4', port=15432, primary=false);
alter node coord1 with (type=coordinator, host='cnode2', port=5432);
alter node coord2 with (type=coordinator, host='cnode3', port=5432);
alter node coord3 with (type=coordinator, host='cnode4', port=5432);
alter node datanode1 with (type=datanode, host='cnode2', port=15432, primary=true);
alter node datanode2 with (type=datanode, host='cnode3', port=15432, primary=false);
alter node datanode3 with (type=datanode, host='cnode4', port=15432, primary=false);
select pgxc_pool_reload();
select * from pgxc_node;



XI. Modify the postgres password
alter user postgres with password '12345678';


XII. Test the cluster

1. Testing basic DDL

(1). Create a role
Execute on cnode2: psql -p 5432 postgres     # connect to the coordinator node

Note: bsadmin is the role name you want to create, and dbpwd is that user's password.
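The statement itself is not shown in the source; a minimal sketch using the placeholder names from the note:

create role bsadmin with login password 'dbpwd';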

(2). Create a tablespace
Execute on cnode2: psql -p 5432 postgres     # connect to the coordinator node
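The tablespace statement is likewise not shown; a sketch, assuming a hypothetical directory /data/pg/tbs01 that already exists and is owned by postgres on every node:

create tablespace tbs01 owner bsadmin location '/data/pg/tbs01';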

(3). Create a database
Execute on cnode2: psql -p 5432 postgres     # connect to the coordinator node
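The statement is not shown either; the database used in the following steps is book_shopping, so a sketch would be (the bsadmin owner is carried over from step (1) and is an assumption):

create database book_shopping owner bsadmin;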

(4). Enter the database
Execute on cnode2: psql -p 5432 postgres     # connect to the coordinator node
\c book_shopping postgres

2. Testing Distributed Storage

Set up a datanode group; data is then distributed across the nodes in that group.
The same group must be created on every coordinator and datanode (connect to each instance on its own port); group information is not synchronized automatically.
psql -h cnode2 -p 5432 postgres postgres -c "create node group gp1 with (datanode2, datanode3);"
psql -h cnode3 -p 5432 postgres postgres -c "create node group gp1 with (datanode2, datanode3);"
psql -h cnode4 -p 5432 postgres postgres -c "create node group gp1 with (datanode2, datanode3);"

psql -h cnode2 -p 15432 postgres postgres -c "create node group gp1 with (datanode2, datanode3);"
psql -h cnode3 -p 15432 postgres postgres -c "create node group gp1 with (datanode2, datanode3);"
psql -h cnode4 -p 15432 postgres postgres -c "create node group gp1 with (datanode2, datanode3);"

Note:
If the statement is run on a node where the gp1 group has not been created, it fails with an error:

postgres=# create table t1 (id serial8 primary key, info text, crt_time timestamp) distribute by hash(id) to group gp1;
NOTICE:  CREATE TABLE will create implicit sequence "t1_id_seq" for serial column "t1.id"
ERROR:  PGXC Group gp1: group not defined
  

Reload the cluster configuration:
psql -h cnode2 -p 5432 postgres postgres -c "select pgxc_pool_reload();"
psql -h cnode3 -p 5432 postgres postgres -c "select pgxc_pool_reload();"
psql -h cnode4 -p 5432 postgres postgres -c "select pgxc_pool_reload();"


Verify that the gp1 group exists on each node:
psql -h cnode2 -p 5432 postgres postgres -c "select * from pgxc_group;"
psql -h cnode3 -p 5432 postgres postgres -c "select * from pgxc_group;"
psql -h cnode4 -p 5432 postgres postgres -c "select * from pgxc_group;"
psql -h cnode2 -p 15432 postgres postgres -c "select * from pgxc_group;"
psql -h cnode3 -p 15432 postgres postgres -c "select * from pgxc_group;"
psql -h cnode4 -p 15432 postgres postgres -c "select * from pgxc_group;"

A node group can also be deleted with the following commands:
psql -h cnode2 -p 5432 postgres postgres -c "delete from pgxc_group where group_name = 'gp1';"
psql -h cnode3 -p 5432 postgres postgres -c "delete from pgxc_group where group_name = 'gp1';"
psql -h cnode4 -p 5432 postgres postgres -c "delete from pgxc_group where group_name = 'gp1';"



Switch to the book_shopping database:

psql -p 5432 postgres
\c book_shopping postgres


Create a table:
create table t1 (id serial8 primary key, info text, crt_time timestamp) distribute by hash(id) to group gp1;

(1). Insert data test
insert into t1 (info, crt_time) select md5(random()::text), clock_timestamp() from generate_series(1,1000);

View the data
Connect to the datanodes and check the row count of the t1 table in book_shopping:
select count(*) from t1;
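For example, the per-node counts can be checked from any host with psql (a sketch; the datanodes listen on port 15432 on cnode2, cnode3, and cnode4 per the configuration above, and the counts of a hash-distributed table should add up to the 1000 rows inserted):

psql -h cnode2 -p 15432 book_shopping postgres -c "select count(*) from t1;"
psql -h cnode3 -p 15432 book_shopping postgres -c "select count(*) from t1;"
psql -h cnode4 -p 15432 book_shopping postgres -c "select count(*) from t1;"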

(2). Update data test
Update and delete operations can only be performed on a coordinator.
On coord1, query the first 5 rows:
select * from t1 limit 5;

 id |               info               |          crt_time          
----+----------------------------------+----------------------------
  1 | 2a85488b8f67cc249fb004345e3d387e | 2015-06-04 10:59:01.600481
  2 | cbc0b99298a92fb883b1df90ec9d2555 | 2015-06-04 10:59:01.60165
  5 | f254966bd9229619e9061fd2e7ea9bac | 2015-06-04 10:59:01.603552
  6 | b3b5db57115da5abb5c25b8084329804 | 2015-06-04 10:59:01.604219
  8 | 4ded54b3d846c5bfd608dbe95f39d2a7 | 2015-06-04 10:59:01.605467
 
On datanode1 (the datanode on cnode2) you can see that the row with id 1 is stored on that node:
book_shopping=# select * from t1 where id=1;
 id |               info               |          crt_time          
----+----------------------------------+----------------------------
  1 | 2a85488b8f67cc249fb004345e3d387e | 2015-06-04 10:59:01.600481
(1 row)


On coord1, update the row with id 1:
update t1 set info='aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' where id = 1;

book_shopping=# update t1 set info='aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' where id = 1;
UPDATE 1


Then, back on the datanode1 node, query the row with id 1 again:
book_shopping=# select * from t1 where id=1;
 id |               info               |          crt_time          
----+----------------------------------+----------------------------
  1 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | 2015-06-04 10:59:01.600481
(1 row)

(3) Query test
On coord1 you can query the data that is spread across the individual datanodes:
book_shopping=# select * from t1;
 id  |               info               |          crt_time          
-----+----------------------------------+----------------------------
   2 | cbc0b99298a92fb883b1df90ec9d2555 | 2015-06-04 10:59:01.60165
   5 | f254966bd9229619e9061fd2e7ea9bac | 2015-06-04 10:59:01.603552
   6 | b3b5db57115da5abb5c25b8084329804 | 2015-06-04 10:59:01.604219
   8 | 4ded54b3d846c5bfd608dbe95f39d2a7 | 2015-06-04 10:59:01.605467
   9 | 9e3035492918569a707fdae0c83ad9fd | 2015-06-04 10:59:01.606088
  12 | 9836e9da92da44eb1d51ad8e8d9cec67 | 2015-06-04 10:59:01.607926
  13 | 94b6fbea895cd530d6e109613270f2a5 | 2015-06-04 10:59:01.608486
  15 | 4a35e75632b17c67f8233d60c3b13cb1 | 2015-06-04 10:59:01.609578
  17 | cf9b88c7e50a2a984cabfd131b308c19 | 2015-06-04 10:59:01.610672
  19 | 7dafeea2571910171af39e5d7f2eb7d5 | 2015-06-04 10:59:01.611785
  21 | 2e5dd218b9521fcb5f0d653f09740fb7 | 2015-06-04 10:59:01.612912
  23 | e91fd409c1a1f80e701e0a962243484e | 2015-06-04 10:59:01.614065
  26 | 49a68c54f8c7e76a9770793706dc9796 | 2015-06-04 10:59:01.615807
  28 | 79699ca1b507c700548560e477d9bf0c | 2015-06-04 10:59:01.616982
  40 | 0eb75ff919396521584f2b833253b834 | 2015-06-04 10:59:01.623801
  41 | 0302e9fbb3e7bdc714c1584191edb136 | 2015-06-04 10:59:01.624409
  42 | 97692d97b5c9c11ca6108181f3573334 | 2015-06-04 10:59:01.624949
  44 | 90eddaf542e0485a96fa73fc78b059dc | 2015-06-04 10:59:01.626069
  46 | f2cafc526979aa2e86a434e5de9929ed | 2015-06-04 10:59:01.627223
  49 | 137e1311d07f117a2249b961eabe4001 | 2015-06-04 10:59:01.628939
  50 | 9323ec806c550b537e202fd5e61d8a24 | 2015-06-04 10:59:01.629536
  52 | dc41ba4e2046ae348b2ce01033b46efe | 2015-06-04 10:59:01.630633
  56 | 27ba4928937806bae5cf6c0359ab9a03 | 2015-06-04 10:59:01.632916
  57 | f50ede190d3383c060fe3829c7accb79 | 2015-06-04 10:59:01.6335
  62 | 16d2634d35b11d5dcd020083a12ee6eb | 2015-06-04 10:59:01.636402
  64 | 840c97d994cd9ea6fee86c6b1b5e43a1 | 2015-06-04 10:59:01.637562
  66 | aa82ea0624c44a8838e2fb7a3cb24a90 | 2015-06-04 10:59:01.638696
  67 | 40535733b65ab6a5023c7d7d142c435e | 2015-06-04 10:59:01.639286
  68 | c9496a076e2fbcca2a781dc665007219 | 2015-06-04 10:59:01.639821
  70 | 2f0f80cf8d2f6d06d591bf68fac1c253 | 2015-06-04 10:59:01.640937
  74 | 95b9eee52187d3131cd7e125eba809e1 | 2015-06-04 10:59:01.643187
  76 | 32e43e6ae96bf320775beccb61b7a48f | 2015-06-04 10:59:01.644315
  77 | 2d327bcc9cbbb4e155d1e8671ed71f71 | 2015-06-04 10:59:01.644962
  80 | 896a5f50e47b01bab9942ca099e2aa67 | 2015-06-04 10:59:01.646691
  81 | b3d5808db7235d5927055838d4666492 | 2015-06-04 10:59:01.647239
  83 | fc7a8ddffdb5d4d91f8f9c0238e3a577 | 2015-06-04 10:59:01.648333
  84 | c1c4ddbadeaf595a94dd1120aeb1479e | 2015-06-04 10:59:01.648877
  85 | d617f8bac6d368fa46f1c00d55834349 | 2015-06-04 10:59:01.649451
  87 | 11f591e7526e7145df6737090ba32556 | 2015-06-04 10:59:01.650531
  90 | e0287526bdf0c9f277db3d9a94489e68 | 2015-06-04 10:59:01.652254
  99 | 2197bf1dc04d3d327ec1b1440dac8249 | 2015-06-04 10:59:01.657322
 101 | 5319875171df0c35e5e4111c0a8dbea4 | 2015-06-04 10:59:01.658466
--More--



You can also query for rows that match a condition:
  select * from t1 where info='aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';

  book_shopping=# select * from t1 where info='aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';
 id |               info               |          crt_time          
----+----------------------------------+----------------------------
  1 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | 2015-06-04 10:59:01.600481
(1 row)



(4) Delete test
delete from t1 where info = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';


Update and delete operations can only be performed on a coordinator; if the delete is run directly on datanode1, an error is raised:
book_shopping=# delete from t1 where info = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';
ERROR:  cannot execute DELETE in a read-only transaction



On coord1, execute:
book_shopping=# delete from t1 where info = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';
DELETE 1

book_shopping=# select count(1) from t1;
 count
-------
   999
(1 row)


3. Testing replicated (copy) storage
create table t2 (id serial8 primary key, info text, crt_time timestamp) distribute by replication;

(1). Insert data test
insert into t2 (info, crt_time) select md5(random()::text), clock_timestamp() from generate_series(1,200);

Because t2 is replicated, the row count of the t2 table in book_shopping is 200 on each of datanode1, datanode2, and datanode3:
select count(*) from t2;


(2). Query operation

On coord1, perform the query:
select * from t2 limit 10;

book_shopping=# select * from t2 limit 10;
 id |               info               |          crt_time          
----+----------------------------------+----------------------------
  1 | 6251581ce28a39e92d322a00880772df | 2015-06-04 11:33:37.100742
  2 | 35b39c91f678ce7acb406a62ab2a15af | 2015-06-04 11:33:37.101695
  3 | ad7657b5ddb5cc2273f925432c8fee40 | 2015-06-04 11:33:37.102295
  4 | f7f0b7bf1ae3d02b34f61e3b500dfe70 | 2015-06-04 11:33:37.102902
  5 | b661a1208585e01c8abcd7bc699c3ac4 | 2015-06-04 11:33:37.103556
  6 | 3b3434e38f5916fd86a14cef94060885 | 2015-06-04 11:33:37.104154
  7 | 8b2be24600a401b3d1770134243bc3b7 | 2015-06-04 11:33:37.104757
  8 | 597cc7d88f19dc58bf0be793d12514b7 | 2015-06-04 11:33:37.105326
  9 | 4b73d76881b3b33719165797b9c34534 | 2015-06-04 11:33:37.105898
 10 | d1b1b5ae9d22cfd132b8811bf256be94 | 2015-06-04 11:33:37.10648
(10 rows)


(3). Update operation
On coord1, perform the update:
update t2 set info='bbbbbbbbbbbbbbbbbbbbbbb' where id = 1;


Connect to datanode1, datanode2, and datanode3 to confirm the row was updated on each of them:
select * from t2 where id = 1;

book_shopping=# select * from t2 where id = 1;
 id |          info           |          crt_time          
----+-------------------------+----------------------------
  1 | bbbbbbbbbbbbbbbbbbbbbbb | 2015-06-04 11:33:37.100742
(1 row)


(4). Delete operation
On coord1, perform the delete:
delete from t2 where id = 1;

book_shopping=# delete from t2 where id = 1;
DELETE 1


Connect to datanode1, datanode2, and datanode3 to confirm the row is gone on each of them:
book_shopping=# select * from t2 where id = 1;
 id | info | crt_time
----+------+----------
(0 rows)


4. Testing stored procedures
Connect to the coord1 node with pgAdmin III, open an SQL query window, enter the following, and execute it:

create or replace function selectinto(id int) returns varchar as
$BODY$
declare
    sql varchar;
    str varchar := '';   -- initialize to an empty string so the concatenation below does not stay NULL
    re  record;
begin
    sql := 'select info from t1 where id=' || id;
    for re in execute sql loop
        str := str || re.info;
    end loop;
    return str;
end;
$BODY$
language plpgsql;

Once executed, the function appears under the functions of every node.
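A quick call from coord1, reusing the t1 table from the distribution test above (id 5 is just an example value):

book_shopping=# select selectinto(5);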


