To build a GlusterFS cluster with an automated installation and configuration script, you only need to specify the list of node IP addresses and the volumes to be configured. Run the script on one machine and it compiles, installs, and deploys the entire cluster; operations on remote nodes are carried out through sshpass.
#!/bin/bash
# Author dysj4099@gmail.com

###############Initialization################
PKG_PATH=/opt/files/glusterfs-3.4.0.tar.gz
ROOT_PASS=test

# Gluster peers
NODES=(192.168.64.87 192.168.64.88)

# Gluster volumes
vol_1=(nova_vol /opt/nova_vol 192.168.64.87,192.168.64.88)
VOLUMES=(vol_1)
#############################################

# Get MY_IP
if [ "${MY_IP}" == "" ]; then
    MY_IP=$(python -c "import socket;socket=socket.socket();socket.connect(('8.8.8.8',53));print socket.getsockname()[0];")
fi

# Step 1. Install sshpass
apt-get install sshpass -y

# Step 2. Compile and install glusterfs on each node.
cd /tmp && tar xf ${PKG_PATH}

cat > /tmp/tmp_install_gfs.sh << _wrtend_
#!/bin/bash
apt-get -y --force-yes purge glusterfs-server glusterfs-common
ps ax | grep gluster | grep -v grep | awk '{print \$1}' | xargs -L 1 kill
apt-get -y --force-yes install libssl-dev flex bison
rm -rf /var/lib/glusterd || true
if [ ! -x /usr/local/sbin/glusterd ]; then
    cd /tmp/glusterfs-3.4.0 && ./configure && make && make install
    cd /tmp && rm -rf /tmp/glusterfs-3.4.0
    ldconfig && update-rc.d -f glusterd defaults
fi
service glusterd restart
sleep 5
rm -rf /tmp/glusterfs-3.4.0
rm /tmp/tmp_install_gfs.sh
_wrtend_

for node in ${NODES[@]}; do
    if [ "${MY_IP}" != "$node" ]; then
        echo $node install start
        sshpass -p ${ROOT_PASS} scp -o StrictHostKeyChecking=no -r /tmp/glusterfs-3.4.0 ${node}:/tmp/glusterfs-3.4.0
        sshpass -p ${ROOT_PASS} scp -o StrictHostKeyChecking=no /tmp/tmp_install_gfs.sh ${node}:/tmp/
        sshpass -p ${ROOT_PASS} ssh -o StrictHostKeyChecking=no root@${node} /bin/bash /tmp/tmp_install_gfs.sh
        echo $node install end
    fi
done

/bin/bash /tmp/tmp_install_gfs.sh

# Step 3. Attach peers
for node in ${NODES[@]}; do
    if [ "${MY_IP}" != "$node" ]; then
        /usr/local/sbin/gluster peer probe ${node}
    fi
done

sleep 15

# Step 4. Verify attach status and create volumes
conn_peer_num=`/usr/local/sbin/gluster peer status | grep Connected | wc -l`
conn_peer_num=`expr $conn_peer_num + 1`
if [ ${conn_peer_num} -eq ${#NODES[@]} ]; then
    echo "All peers have been attached."
    for vol in ${VOLUMES[@]}; do
        eval vol_info=(\${$vol[@]})
        eval vol_nodes=(${vol_info[2]//,/ })
        vol_path=""
        for node in ${vol_nodes[@]}; do
            vol_path=$vol_path$node:${vol_info[1]}" "
        done
        # create volume
        /usr/local/sbin/gluster volume create ${vol_info[0]} replica 2 ${vol_path}
        # start volume
        /usr/local/sbin/gluster volume start ${vol_info[0]}
    done
else
    echo "Attach peers error"
    exit 1
fi
The script is simple. You only need to fill in the cluster information yourself:
###############Initialization################
PKG_PATH=/opt/files/glusterfs-3.4.0.tar.gz
ROOT_PASS=test

# Gluster peers
NODES=(192.168.64.87 192.168.64.88)

# Gluster volumes
vol_1=(nova_vol /opt/nova_vol 192.168.64.87,192.168.64.88)
VOLUMES=(vol_1)
#############################################
PKG_PATH is the path to the GlusterFS source tarball.
ROOT_PASS is the root password of each node (all nodes must share the same root password, since the script uses it for the sshpass-driven SSH connections).
NODES lists the IP addresses of the cluster nodes (separated by spaces).
vol_1 describes a volume to be created: the volume name (nova_vol), the brick data path (/opt/nova_vol), and the list of brick IP addresses (separated by commas). You can define multiple such arrays and list their names in VOLUMES.
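For example, to provision a second volume alongside nova_vol, you could add another array and list it in VOLUMES (the glance_vol name and /opt/glance_vol path here are hypothetical, chosen only for illustration). The snippet below also shows the eval indirection the script uses to resolve each name in VOLUMES back to its array:

```shell
#!/bin/bash
# Gluster volumes: (volume name, brick path, comma-separated brick IPs)
vol_1=(nova_vol /opt/nova_vol 192.168.64.87,192.168.64.88)
# Hypothetical second volume reusing the same two nodes:
vol_2=(glance_vol /opt/glance_vol 192.168.64.87,192.168.64.88)
VOLUMES=(vol_1 vol_2)

# Same indirection the script uses: expand the array whose NAME is in $vol.
for vol in ${VOLUMES[@]}; do
    eval vol_info=(\${$vol[@]})
    echo "volume=${vol_info[0]} path=${vol_info[1]} bricks=${vol_info[2]}"
done
```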
After reading the cluster configuration, the script obtains the local IP address, unpacks the source tarball, generates a local installation script, copies it to the /tmp directory of each node, and executes it there. If you use a different GlusterFS release, update the tarball path and the name of the unpacked directory referenced in the generated installation script.
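The per-node installer is generated with an unquoted heredoc, so variables expand at write time, while anything that must survive literally into the file (such as awk's $1) has to be escaped. A minimal sketch of the same pattern (demo_install.sh is a stand-in filename, not the script's real one):

```shell
#!/bin/bash
# In an unquoted heredoc, ${PKG} expands now; \$1 is written as a literal $1
# so awk on the remote node sees it, not an empty expansion.
PKG=glusterfs-3.4.0

cat > /tmp/demo_install.sh << _wrtend_
#!/bin/bash
cd /tmp/${PKG} && ./configure && make && make install
ps ax | grep gluster | grep -v grep | awk '{print \$1}' | xargs -L 1 kill
_wrtend_

cat /tmp/demo_install.sh
```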
Once compilation and installation finish on every node, the script probes the peers, then creates and starts the volumes. Adjust the parameters of the gluster volume create command (for example, the replica count, which is hard-coded to 2) to match your cluster.
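The brick list handed to gluster volume create is assembled from the volume array; the loop below is a standalone sketch of that part of the script, so you can see the string it produces before changing the parameters:

```shell
#!/bin/bash
vol_1=(nova_vol /opt/nova_vol 192.168.64.87,192.168.64.88)

vol="vol_1"
eval vol_info=(\${$vol[@]})            # resolve the array named by $vol
eval vol_nodes=(${vol_info[2]//,/ })   # split the comma-separated IPs into an array
vol_path=""
for node in ${vol_nodes[@]}; do
    vol_path=$vol_path$node:${vol_info[1]}" "
done

# vol_path now holds "192.168.64.87:/opt/nova_vol 192.168.64.88:/opt/nova_vol "
# (note the trailing space), which the script passes to:
#   gluster volume create ${vol_info[0]} replica 2 ${vol_path}
echo "$vol_path"
```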