Greenplum test environment deployment

This walkthrough deploys a Greenplum test environment on a Citrix virtualization platform, using three RHEL 6.4 hosts.

1. Prepare three hosts

After the Master is created from the template, an additional 20 GB disk (/dev/xvdb) and two NICs (eth1 and eth2) are added.

After the Standby is created from the template, an additional 20 GB disk (/dev/xvdb) and two NICs (eth1 and eth2) are added.

After Segment01 is created from the template, an additional 50 GB disk (/dev/xvdb) and two NICs (eth1 and eth2) are added.

Network Planning

Host        eth0 (external IP)          eth1             eth2
Master      192.168.9.123               172.16.10.101    172.16.11.101
Standby     192.168.9.124               172.16.10.102    172.16.11.102
Segment01   192.168.9.125 (optional)    172.16.10.1      172.16.11.1

Resources in the test environment are limited, so only three nodes are configured for now; Segment02 and Segment03 may be added later as needed.

Modify host names

Set the three host names of Master, Standby, and Segment01 to mdw, smdw, and sdw1 respectively.

To change a host name:

hostname <new-name>            # takes effect immediately for the current session
vi /etc/sysconfig/network      # set HOSTNAME=<new-name> so it persists across reboots
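For example, on the Master (values per the naming above; the Standby and Segment01 are handled the same way with smdw and sdw1); the sed invocation is just one way to script the persistent change:

hostname mdw                                                   # immediate, non-persistent
sed -i 's/^HOSTNAME=.*/HOSTNAME=mdw/' /etc/sysconfig/network   # persistent on RHEL 6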

Optional: define a node list for the synchronization helper scripts used throughout the rest of this walkthrough. It simply makes it easier to run the same command on every node.

export NODE_LIST='mdw smdw sdw1'
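The cluster_run_all_nodes and cluster_copy_all_nodes helpers invoked below are not reproduced in the source; a minimal sketch of what they presumably look like, assuming NODE_LIST is set and the SSH trust configured below is in place:

cluster_run_all_nodes() {
    # Run the given command string on every node in NODE_LIST via ssh.
    local node
    for node in $NODE_LIST; do
        ssh "root@$node" "$1"
    done
}

cluster_copy_all_nodes() {
    # Copy a local file ($1) to the given remote path ($2) on every node.
    local node
    for node in $NODE_LIST; do
        scp "$1" "root@$node:$2"
    done
}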

Add temporary entries to /etc/hosts (vi /etc/hosts):

192.168.9.123 mdw
192.168.9.124 smdw
192.168.9.125 sdw1

Configure passwordless SSH from the first node to itself and to the other machines:

ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.9.123
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.9.124
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.9.125
cluster_run_all_nodes "hostname; date"

Disk Planning

Greenplum recommends the XFS file system, so the xfsprogs dependency must be installed on all nodes:
# rpm -ivh xfsprogs-3.1.1-10.el6.x86_64.rpm

Create a /data directory on every node to mount the XFS file system, and format /dev/xvdb:

mkdir /data

mkfs.xfs /dev/xvdb

(mkfs.xfs prints the new file system's geometry: allocation group size, block counts, internal log version, and so on; the exact output is omitted here.)
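A quick manual mount confirms the new file system works; this check is not in the source but is harmless:

mount /dev/xvdb /data    # manual mount; the fstab entry below makes it persistent
df -h /data              # the new device should show up here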

Add the following line to /etc/fstab (vi /etc/fstab):

/dev/xvdb /data xfs rw,noatime,inode64,allocsize=16m 1 1

2. Disable iptables and SELinux

cluster_run_all_nodes "hostname; service iptables stop"
cluster_run_all_nodes "hostname; chkconfig iptables off"
cluster_run_all_nodes "hostname; chkconfig ip6tables off"
cluster_run_all_nodes "hostname; setenforce 0"
cluster_run_all_nodes "hostname; sestatus"
vi /etc/selinux/config    # set SELINUX=disabled
cluster_copy_all_nodes /etc/selinux/config /etc/selinux/config

Note: these settings must be applied uniformly on all nodes. Because SSH trust was configured first, the files can be synchronized with the scripts; without them, each node has to be configured one by one.

3. Set Recommended System Parameters

vi /etc/sysctl.conf

kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 250 512000 100 2048
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 1025 65535
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 2
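The source doesn't show it, but the new kernel parameters can be loaded without a reboot:

sysctl -p                                      # re-read /etc/sysctl.conf on the current node
cluster_run_all_nodes "hostname; sysctl -p"    # after the sync step below, apply everywhere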

vi /etc/security/limits.conf

* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
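These limits apply to new login sessions; a quick check after logging in again (not shown in the source):

ulimit -n    # open files, expect 65536
ulimit -u    # max user processes, expect 131072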

Synchronize to each node:

cluster_copy_all_nodes /etc/sysctl.conf /etc/sysctl.conf
cluster_copy_all_nodes /etc/security/limits.conf /etc/security/limits.conf

Disk read-ahead and the deadline I/O scheduler

Add the following to /etc/rc.d/rc.local, then synchronize it:

blockdev --setra 16385 /dev/xvdb
echo deadline > /sys/block/xvdb/queue/scheduler

cluster_copy_all_nodes /etc/rc.d/rc.local /etc/rc.d/rc.local

Note: after a reboot, run blockdev --getra /dev/xvdb to confirm the setting is still in effect.

Verify the character set on all nodes:

cluster_run_all_nodes "hostname; echo $LANG"

Restart all nodes and verify that the modifications took effect:

blockdev --getra /dev/xvdb
more /sys/block/xvdb/queue/scheduler
cluster_run_all_nodes "hostname; service iptables status"

4. Install Greenplum on the Master

mkdir -p /data/soft

Upload greenplum-db-4.3.4.2-build-1-RHEL5-x86_64.zip to the Master, then:

unzip greenplum-db-4.3.4.2-build-1-RHEL5-x86_64.zip
/bin/bash greenplum-db-4.3.4.2-build-1-RHEL5-x86_64.bin

5. Install and configure Greenplum on all nodes

Configure /etc/hosts:

192.168.9.123 mdw
172.16.10.101 mdw-1
172.16.11.101 mdw-2
192.168.9.124 smdw
172.16.10.102 smdw-1
172.16.11.102 smdw-2
192.168.9.125 sdw1
172.16.10.1 sdw1-1
172.16.11.1 sdw1-2

Synchronize the /etc/hosts configuration:

cluster_copy_all_nodes /etc/hosts /etc/hosts

Configure the mutual trust required by Greenplum
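The source breaks off here. In Greenplum 4.3 this step is normally done with the bundled gpssh-exkeys utility; a sketch, assuming the default install path and a hypothetical host file listing mdw, smdw, and sdw1 one per line:

source /usr/local/greenplum-db/greenplum_path.sh    # default install path assumed
gpssh-exkeys -f /data/soft/all_hosts                # /data/soft/all_hosts is a hypothetical host file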
