Detailed steps for creating a Postgres-XL cluster

Source: Internet
Author: User
Tags: psql

Recently a project at my company required a Postgres-XL cluster. There is not much information about this topic online, and after a period of searching and several detours the installation was finally completed. The detailed steps are recorded here for reference, in the hope that they help anyone who needs them.

Let's get started:

1. Host list and role allocation for the cluster installation:

10.21.13.109 GTM
10.21.13.67 coordinator & datanode
10.21.13.60 datanode

2. Create the postgres user on every node. I use Ansible to create the user and install the required packages, which saves manual work (yum could also be driven through Ansible's yum module instead of the shell module, so using shell here is a bit amateurish). A minimal inventory sketch follows these commands.

ansible all -m shell -a "useradd postgres"
ansible all -m shell -a 'echo "postgres:postgres" | chpasswd'    # set the postgres password (substitute your own)
ansible all -m shell -a "yum install -y flex bison readline-devel zlib-devel openjade docbook-style-dsssl gcc"
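These ad-hoc commands assume an Ansible inventory that already lists the three hosts. A minimal sketch of such an inventory using the addresses above (the group name pgxl is my own choice, not part of the original setup):

# /etc/ansible/hosts -- minimal inventory for the three cluster nodes (hypothetical group name)
cat > /etc/ansible/hosts <<'EOF'
[pgxl]
10.21.13.109
10.21.13.67
10.21.13.60
EOF

# quick connectivity check before running the shell commands above
ansible all -m ping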

3. Download the software on each node and decompress it (a quick check that the extraction worked follows the commands).

ansible all -m shell -a "wget https://www.postgres-xl.org/downloads/postgres-xl-9.5r1.6.tar.bz2"
ansible all -m shell -a "mv /root/postgres-xl-9.5r1.6.tar.bz2 /tmp; chown postgres:postgres /tmp/postgres-xl-9.5r1.6.tar.bz2"
ansible all -m shell -a "tar -jxvf /tmp/postgres-xl-9.5r1.6.tar.bz2 -C /home/postgres"
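Before moving on, it may be worth confirming that the tarball really was unpacked on every node. A small sketch; the directory name assumes the default top-level folder inside the 9.5r1.6 tarball:

ansible all -m shell -a "ls -d /home/postgres/postgres-xl-9.5r1.6 && ls /home/postgres/postgres-xl-9.5r1.6 | head -5"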

4. Create a pgxc directory on all nodes to store the Postgres-XL data files.

ansible all -m shell -a "su - postgres -c 'mkdir -p /home/postgres/pgxc/nodes/'"

5. Modify the .bashrc file of each node. Note: modifying /etc/profile does not take effect.

su - postgres
vi .bashrc

Add the following content:

export PGHOME=/home/postgres/pgxl
export PGUSER=postgres
export LD_LIBRARY_PATH=$PGHOME/lib:$LD_LIBRARY_PATH
export PATH=$PGHOME/bin:$PATH

Make it take effect:

source ~/.bashrc
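To confirm the environment variables were picked up on every node, a quick check such as the following can help (a sketch; run as root through Ansible, switching to the postgres login shell so .bashrc is read):

ansible all -m shell -a "su - postgres -c 'echo PGHOME=\$PGHOME; echo PATH=\$PATH'"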

6. Disable the firewall or open the corresponding ports. Here I stop the firewall on each virtual machine and keep it disabled across restarts (a sketch of opening only the required ports follows the commands below):

service iptables stop
chkconfig iptables off    # takes effect after restart
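If you would rather keep iptables running than disable it, a sketch of opening only the ports used later in this guide (GTM 20001, coordinator 20004 and pooler 20010, datanodes 20008 and 20012; adjust to your own pgxc_ctl.conf, and note that 10.21.13.0/25 is simply the /25 subnet the hosts above sit in):

# allow the Postgres-XL ports between the cluster nodes, then persist the rules
for port in 20001 20004 20008 20010 20012; do
    iptables -I INPUT -p tcp -s 10.21.13.0/25 --dport $port -j ACCEPT
done
service iptables save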

7. Disable SELinux:

View SELinux status:

/usr/sbin/sestatus -v    # if the "SELinux status" parameter shows enabled, SELinux is on
SELinux status:          enabled

Temporarily disable it (no need to restart the machine):

setenforce 0      # set SELinux to permissive mode
# setenforce 1    # set SELinux to enforcing mode

Permanently disable it:

Modify the /etc/selinux/config file:
change SELINUX=enforcing to SELINUX=disabled
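The same change can be scripted; a small sketch (back up the file first):

cp /etc/selinux/config /etc/selinux/config.bak
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config    # should now show SELINUX=disabled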

8. Run the following build and install steps on each node. If an error is reported at any point, you must resolve it before continuing; for example, if the gcc package is missing, the subsequent compilation will not succeed. A quick verification sketch follows the commands.

cd postgres-xl-9.5r1.6
./configure --prefix=/home/postgres/pgxl/
make
make install
cd contrib/
make
make install
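A quick way to confirm the build actually landed in the prefix on each node (a sketch; the paths follow the --prefix used above):

ansible all -m shell -a "su - postgres -c '/home/postgres/pgxl/bin/postgres --version && ls /home/postgres/pgxl/bin | wc -l'"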

9. Run pgxc_ctl to generate the template for the cluster configuration file:

./pgxc_ctl      # it will report an error saying there is no configuration file; ignore it

PGXC prepare    # run this command at the PGXC prompt to generate a configuration file template
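The prepare command writes the template to $HOME/pgxc_ctl/pgxc_ctl.conf; it may be worth keeping a pristine copy before editing it in the next step (a sketch):

cp ~/pgxc_ctl/pgxc_ctl.conf ~/pgxc_ctl/pgxc_ctl.conf.orig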

10. Modify the host names, ports, and other settings in the template. The resulting file is as follows:

vi ~/pgxc_ctl/pgxc_ctl.conf

pgxcInstallDir=$HOME/pgxc
pgxcOwner=$USER
pgxcUser=$pgxcOwner
tmpDir=/tmp
localTmpDir=$tmpDir
configBackup=n
configBackupHost=pgxc-linker
configBackupDir=$HOME/pgxc
configBackupFile=pgxc_ctl.bak
gtmName=gtm
gtmMasterServer=10.21.13.109
gtmMasterPort=20001
gtmMasterDir=$HOME/pgxc/nodes/gtm
gtmExtraConfig=none
gtmMasterSpecificExtraConfig=none
gtmSlave=n
gtmProxyDir=$HOME/pgxc/nodes/gtm_pxy
gtmProxy=y
gtmProxyNames=(gtm_pxy1 gtm_pxy2)
gtmProxyServers=(10.21.13.67 10.21.13.60)
gtmProxyPorts=(20001 20001)
gtmProxyDirs=($gtmProxyDir)
gtmPxyExtraConfig=none
gtmPxySpecificExtraConfig=(none)
coordMasterDir=$HOME/pgxc/nodes/coord
coordSlaveDir=$HOME/pgxc/nodes/coord_slave
coordArchLogDir=$HOME/pgxc/nodes/coord_archlog
coordNames=(coord1)
coordPorts=(20004)
poolerPorts=(20010)
coordPgHbaEntries=(10.21.20./25)            # <-- very important: must match the actual subnet (see problem 1 below)
coordMasterServers=(10.21.13.67)
coordMasterDirs=($coordMasterDir)
coordMaxWALsernder=5
coordMaxWALSenders=($coordMaxWALsernder)
coordSlave=n                                # not using coordinator slaves
coordSlaveSync=y
coordSlaveServers=(10.21.13.60 10.21.13.67)
coordSlavePorts=(20004 20005 20004 20005)
coordSlavePoolerPorts=(20010 20011 20010 20011)
coordSlaveDirs=($coordSlaveDir)
coordArchLogDirs=($coordArchLogDir)
coordExtraConfig=coordExtraConfig
cat > $coordExtraConfig <<EOF
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
listen_addresses = '*'
max_connections = 100
EOF
coordSpecificExtraConfig=(none)
coordExtraPgHba=none
coordSpecificExtraPgHba=(none)
coordAdditionalSlaves=n
coordAdditionalSlaveSet=(cad1)
cad1_Sync=n
cad1_Servers=(10.21.13.67 10.21.13.60)
cad1_dir=$HOME/pgxc/nodes/coord_slave_cad1
cad1_Dirs=($cad1_dir $cad1_dir)
cad1_ArchLogDir=$HOME/pgxc/nodes/coord_archlog_cad1
cad1_ArchLogDirs=($cad1_ArchLogDir $cad1_ArchLogDir)
datanodeMasterDir=$HOME/pgxc/nodes/dn_master
datanodeSlaveDir=$HOME/pgxc/nodes/dn_slave
datanodeArchLogDir=$HOME/pgxc/nodes/datanode_archlog
primaryDatanode=datanode1
datanodeNames=(datanode1 datanode2)
datanodePorts=(20008 20008)                 # master ports
datanodePoolerPorts=(20012 20012)
datanodePgHbaEntries=(10.21.20./25)         # <-- very important: must match the actual subnet (see problem 1 below)
datanodeMasterServers=(10.21.13.67 10.21.13.60)
datanodeMasterDirs=($datanodeMasterDir)
datanodeMaxWalSender=5
datanodeMaxWALSenders=($datanodeMaxWalSender)
datanodeSlave=n                             # not using datanode slaves
datanodeSlaveServers=(10.21.13.60 10.21.13.67)
datanodeSlavePorts=(20008 20008)
datanodeSlavePoolerPorts=(20012 20012)
datanodeSlaveSync=y
datanodeSlaveDirs=($datanodeSlaveDir)
datanodeArchLogDirs=($datanodeArchLogDir)
datanodeExtraConfig=none
datanodeSpecificExtraConfig=(none)
datanodeExtraPgHba=none
datanodeSpecificExtraPgHba=(none)
datanodeAdditionalSlaves=n
walArchive=n                                # do not enable WAL archiving
walArchiveSet=(war1 war2)
war1_source=(master)
#war1_source=(slave)
#war1_source=(additiona_coordinator_slave_set additional_datanode_slave_set)
war1_host=node10
war1_backupdir=$HOME/pgxc/backup_war1
war2_source=(master)
war2_host=node11
war2_backupdir=$HOME/pgxc/backup_war2

11. Run the init command. If there are no errors, this is the last step: pgxc_ctl completes all of the deployment work.

pgxc_ctl -c pgxc_ctl.conf init all

The complete output is as follows:

[postgres@ip-10-21-13-109 pgxc_ctl]$ pgxc_ctl -c pgxc_ctl.conf init all
/bin/bash
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Reading configuration using /home/postgres/pgxc_ctl/pgxc_ctl_bash --home /home/postgres/pgxc_ctl --configuration /home/postgres/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
   ******** PGXC_CTL START ***************
Current directory: /home/postgres/pgxc_ctl
Initialize GTM master
ERROR: target directory (/home/postgres/pgxc/nodes/gtm) exists and not empty. Skip GTM initilialization
1:3768231680:19:35:02.983 CST -FATAL:  lock file "gtm.pid" already exists
2:3768231680:19:35:02.983 CST -HINT:  Is another GTM (PID 14635) running in data directory "/home/postgres/pgxc/nodes/gtm"?
LOCATION:  CreateLockFile, main.c:2099
Waiting for server to shut down... done
Server stopped
Done.
Start GTM master
Server starting
Initialize all the gtm proxies.
Initializing gtm proxy gtm_pxy1.
Initializing gtm proxy gtm_pxy2.
Waiting for server to shut down... done
Server stopped
The files belonging to this GTM system will be owned by user "postgres".
This user must also own the server process.
Fixing permissions on existing directory /home/postgres/pgxc/nodes/gtm_pxy... OK
Creating configuration files... OK
Success.
Waiting for server to shut down... done
Server stopped
The files belonging to this GTM system will be owned by user "postgres".
This user must also own the server process.
Fixing permissions on existing directory /home/postgres/pgxc/nodes/gtm_pxy... OK
Creating configuration files... OK
Success.
Done.
Starting all the gtm proxies.
Starting gtm proxy gtm_pxy1.
Starting gtm proxy gtm_pxy2.
Server starting
Server starting
Done.
Initialize all the coordinator masters.
Initialize coordinator master coord1.
ERROR: target coordinator master coord1 is running now. Skip initilialization.
Done.
Starting coordinator master.
Starting coordinator master coord1
ERROR: target coordinator master coord1 is already running now. Skip initialization.
Done.
Initialize all the datanode masters.
Initialize the datanode master datanode1.
Initialize the datanode master datanode2.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
Fixing permissions on existing directory /home/postgres/pgxc/nodes/dn_master... OK
Creating subdirectories... OK
Selecting default max_connections... 100
Selecting default shared_buffers... 128MB
Selecting dynamic shared memory implementation... posix
Creating configuration files... OK
Creating template1 database in /home/postgres/pgxc/nodes/dn_master/base/1... OK
Initializing pg_authid... OK
Initializing dependencies... OK
Creating system views... OK
Creating cluster information... OK
Loading system objects' descriptions... OK
Creating collations... OK
Creating conversions... OK
Creating dictionaries... OK
Setting privileges on built-in objects... OK
Creating information schema... OK
Loading PL/pgSQL server-side language... OK
Vacuuming database template1... OK
Copying template1 to template0... OK
Copying template1 to postgres... OK
Syncing data to disk... OK
Freezing database template0... OK
Freezing database template1... OK
Freezing database postgres... OK
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
Fixing permissions on existing directory /home/postgres/pgxc/nodes/dn_master... OK
Creating subdirectories... OK
Selecting default max_connections... 100
Selecting default shared_buffers... 128MB
Selecting dynamic shared memory implementation... posix
Creating configuration files... OK
Creating template1 database in /home/postgres/pgxc/nodes/dn_master/base/1... OK
Initializing pg_authid... OK
Initializing dependencies... OK
Creating system views... OK
Creating cluster information... OK
Loading system objects' descriptions... OK
Creating collations... OK
Creating conversions... OK
Creating dictionaries... OK
Setting privileges on built-in objects... OK
Creating information schema... OK
Loading PL/pgSQL server-side language... OK
Vacuuming database template1... OK
Copying template1 to template0... OK
Copying template1 to postgres... OK
Syncing data to disk... OK
Freezing database template0... OK
Freezing database template1... OK
Freezing database postgres... OK
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success.
Done.
Starting all the datanode masters.
Starting datanode master datanode1.
Starting datanode master datanode2.
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".
Done.
ALTER NODE coord1 WITH (HOST = '10.21.13.67', PORT = 20004);
ALTER NODE
CREATE NODE datanode1 WITH (TYPE = 'datanode', HOST = '10.21.13.67', PORT = 20008, PRIMARY, PREFERRED);
ERROR:  PGXC Node datanode1: object already defined
CREATE NODE datanode2 WITH (TYPE = 'datanode', HOST = '10.21.13.60', PORT = 20008);
ERROR:  PGXC Node datanode2: object already defined
SELECT pgxc_pool_reload();
 pgxc_pool_reload
------------------
 t
(1 row)
Done.
EXECUTE DIRECT ON (datanode1) 'CREATE NODE coord1 WITH (TYPE = ''coordinator'', HOST = ''10.21.13.67'', PORT = 20004)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode1) 'ALTER NODE datanode1 WITH (TYPE = ''datanode'', HOST = ''10.21.13.67'', PORT = 20008, PRIMARY, PREFERRED)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode1) 'CREATE NODE datanode2 WITH (TYPE = ''datanode'', HOST = ''10.21.13.60'', PORT = 20008)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode1) 'SELECT pgxc_pool_reload()';
 pgxc_pool_reload
------------------
 t
(1 row)
EXECUTE DIRECT ON (datanode2) 'CREATE NODE coord1 WITH (TYPE = ''coordinator'', HOST = ''10.21.13.67'', PORT = 20004)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode2) 'CREATE NODE datanode1 WITH (TYPE = ''datanode'', HOST = ''10.21.13.67'', PORT = 20008, PRIMARY, PREFERRED)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode2) 'ALTER NODE datanode2 WITH (TYPE = ''datanode'', HOST = ''10.21.13.60'', PORT = 20008)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode2) 'SELECT pgxc_pool_reload()';
 pgxc_pool_reload
------------------
 t
(1 row)
Done.
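The "exists and not empty" and "already running" errors in the output above appear because my directories still held data from an earlier attempt; a clean first run should not show them. Either way, pgxc_ctl can report the state of every component afterwards (a sketch; the config path matches the one used above):

pgxc_ctl -c ~/pgxc_ctl/pgxc_ctl.conf monitor all    # shows whether GTM, proxies, coordinator and datanodes are running
# pgxc_ctl -c ~/pgxc_ctl/pgxc_ctl.conf stop all     # stop the whole cluster
# pgxc_ctl -c ~/pgxc_ctl/pgxc_ctl.conf start all    # start it again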

12. Verify availability

Local connection on the coordinator node:

psql -p 20004

Remote connection:

psql -h 10.21.13.67 -p 20004 -U postgres

postgres=# select * from pgxc_node;
 node_name | node_type | node_port |  node_host  | nodeis_primary | nodeis_preferred |   node_id
-----------+-----------+-----------+-------------+----------------+------------------+------------
 datanode1 | D         |     20008 | 10.21.13.67 | t              | t                |  888802358
 datanode2 | D         |     20008 | 10.21.13.60 | f              | f                | -905831925
 coord1    | C         |     20004 | 10.21.13.67 | f              | f                | 1885696643

13. Table creation test

postgres=# create table test (id integer, name varchar(10));

postgres=# insert into test (id, name) values (1, 'test');

postgres=# insert into test (id, name) values (2, 'test');

Query the result:

postgres=# select * from test;
 id | name
----+------
  1 | test
  2 | test
(2 rows)
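To see where the two rows actually landed, you can query each datanode directly from the coordinator with EXECUTE DIRECT (a sketch using the node names and ports from this configuration; the table is typically hash-distributed, so the rows may end up on different datanodes):

psql -h 10.21.13.67 -p 20004 -U postgres -c "EXECUTE DIRECT ON (datanode1) 'select * from test'"
psql -h 10.21.13.67 -p 20004 -U postgres -c "EXECUTE DIRECT ON (datanode2) 'select * from test'"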

Problem summary:

1. If you log on to the database, execute the following statement, and find that only the coordinator information is returned, the cluster has not been initialized successfully:

postgres=# select * from pgxc_node;
 node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred |  node_id
-----------+-----------+-----------+-----------+----------------+------------------+------------
 coord3    | C         |     20004 | localhost | f              | f                | 1638403545
(1 row)

The cause is that the access control entries coordPgHbaEntries=(10.21.20./25) and datanodePgHbaEntries=(10.21.20./25) must match the actual subnet mask; otherwise, the datanode nodes and the coordinator node cannot communicate with each other.

This problem troubled me for a week because I did not pay attention to this detail; I kept looking for reasons why the datanodes could not recognize each other, and it turned out to be a very small point.

[postgres@ip-10-21-13-109 pgxc_ctl]$ ifconfig
eth0      Link encap:Ethernet  HWaddr 02:57:E7:56:4C:70
          inet addr:10.21.13.109  Bcast:10.21.13.127  Mask:255.255.255.128   <-- a /25 mask, so the pg_hba entries (10.21.20./25) must use /25
          inet6 addr: fe80::57:e7ff:fe56:4c70/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:29500 errors:0 dropped:0 overruns:0 frame:0
          TX packets:28538 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3113614 (2.9 MiB)  TX bytes:4653853 (4.4 MiB)
          Interrupt:160
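One way to double-check the prefix length that the pg_hba entries need to use, assuming the RHEL/CentOS ipcalc utility is installed (a sketch):

ipcalc -p 10.21.13.109 255.255.255.128    # prints PREFIX=25, i.e. the /25 that coordPgHbaEntries/datanodePgHbaEntries must match
ipcalc -n 10.21.13.109 255.255.255.128    # prints NETWORK=10.21.13.0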

2. A database creation failure like the one below is caused by the same access control problem. After deployment, this setting ends up in /home/postgres/pgxc/nodes/dn_master/pg_hba.conf and /home/postgres/pgxc/nodes/coord/pg_hba.conf, but re-running init will overwrite these files.

postgres=# create database test;
ERROR:  Failed to get pooled connections
HINT:  This may happen because one or more nodes are currently unreachable, either because of node or network failure.
 Its also possible that the target node may have hit the connection limit or the pooler is configured with low connections.
 Please check if all nodes are running fine and also review max_connections and max_pool_size configuration parameters
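When this error appears, a quick way to narrow it down is to test from the coordinator machine (10.21.13.67 here) whether each datanode port is reachable at all, for example with psql (a sketch using the hosts and ports from this guide):

psql -h 10.21.13.67 -p 20008 -U postgres -c 'select 1'    # datanode1
psql -h 10.21.13.60 -p 20008 -U postgres -c 'select 1'    # datanode2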

3. Many people create the coordinator and datanode nodes by manually executing commands like the following. In fact, if pgxc_ctl.conf is configured correctly, pgxc_ctl does all of this for you and no manual intervention is needed (if you do run them by hand, remember the pool reload shown after the commands).

create node coord1 with (type = 'coordinator', host = 'c6702', port = 20004);
create node coord2 with (type = 'coordinator', host = 'c6703', port = 20005);
alter node coord3 with (type = 'coordinator', host = 'c6704', port = 20004);
create node datanode1 with (type = 'datanode', host = 'c6702', port = 20008);
create node datanode2 with (type = 'datanode', host = 'c6703', port = 20008);
alter node datanode3 with (type = 'datanode', host = 'c6704', port = 20008);
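If you do adjust node definitions by hand like this, the pooler has to re-read them, just as the init output above does; a sketch:

psql -p 20004 -c 'SELECT pgxc_pool_reload();'    # run on the coordinator after CREATE/ALTER NODE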

4. During the compilation step, carefully check the make output and fix any errors; otherwise you will find later that many of the binaries were never installed successfully.

5. If you encounter the following problem when logging in to PostgreSQL remotely:

[root@c6703 ~]# psql -h 10.21.13.67 -p 20004 -U postgres
psql: FATAL:  no pg_hba.conf entry for host "172.16.32.116", user "postgres", database "postgres"

Modify the access control in /home/postgres/pgxc/nodes/dn_master/pg_hba.conf and /home/postgres/pgxc/nodes/coord/pg_hba.conf to allow access from the corresponding IP address, and you will be able to log in. Note: you must modify the pg_hba.conf files of all coordinators and datanodes, and then have the postgres user run a reload (or restart the nodes) so the configuration takes effect. A sketch of scripting this follows the example below.

[root@c6703 ~]# psql -h 10.21.13.67 -p 20004 -U test
postgres=# SELECT pg_reload_conf();
[root@c6703 ~]# psql -h 10.21.13.67 -p 20004 -U test
Password for user test:
psql (PGXL 9.5r1.6, based on PG 9.5.8 (Postgres-XL 9.5r1.6))
Type "help" for help.
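As mentioned above, a sketch of the whole fix: append a matching entry to every pg_hba.conf and reload each node with pg_ctl. The client subnet 172.16.32.0/24 is only an example taken from the error message above, and trust is for testing only (prefer md5 in practice):

# on 10.21.13.67 (coordinator + datanode1), as the postgres user
echo "host all all 172.16.32.0/24 trust" >> /home/postgres/pgxc/nodes/coord/pg_hba.conf
echo "host all all 172.16.32.0/24 trust" >> /home/postgres/pgxc/nodes/dn_master/pg_hba.conf
pg_ctl reload -D /home/postgres/pgxc/nodes/coord
pg_ctl reload -D /home/postgres/pgxc/nodes/dn_master

# repeat the dn_master part on 10.21.13.60 (datanode2)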

