From the Zhou clan: tidying up my technical notes and leaving a small tutorial for those who come after...
Application Scenarios
The project uses keepalived for HTTP high availability with dual-entry access. That raises a problem: the web files and interfaces on each server need to stay synchronized across both machines at the same time, which is what led me to file sharing.
GlusterFS builds on the idea of NFS-style network storage and improves on it considerably, and it has developed quickly since being open-sourced. The latest 3.5.x series is relatively mature and fixes a lot of bugs, and many well-known operators use it, especially network-disk services; as far as I know, Sogou is one of the heaviest users.
Network structure
192.168.1.202   GlusterFS server, back-end storage
192.168.1.203   GlusterFS server, back-end storage
192.168.1.204   GlusterFS client, front-end caller (mount)
Server System
CentOS release 6.3 (Final), x86_64
Installation steps:
------------- Below is the configuration of the two back-end storage nodes; the steps that differ between them are described separately -----------
1. Add the Gluster yum repository to get the latest version
cd /etc/yum.repos.d/
wget http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.1/EPEL.repo/glusterfs-epel.repo
If this link does not work, just write the repo file yourself; it is a very simple thing, as follows:
[root@202_server yum.repos.d]# cat glusterfs-epel.repo
# Place this file in your /etc/yum.repos.d/ directory

[glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key

[glusterfs-noarch-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/noarch
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key

[glusterfs-source-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes. - Source
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/SRPMS
enabled=0
skip_if_unavailable=1
gpgcheck=1
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key
2. Check that the yum repository has been loaded successfully
(Screenshot: the repository list shows the glusterfs-epel repos, so the yum source has clearly been loaded successfully.)
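If you prefer a command-line check to a screenshot, something along these lines should list the newly added repositories (a small extra check, not part of the original steps):
yum repolist | grep -i gluster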
3. Then install it with yum; very simple:
yum -y install glusterfs-server
4. Once yum finishes installing GlusterFS, two init scripts appear under /etc/init.d/: glusterd and glusterfsd. Start them one after the other:
[root@202_server ~]# /etc/init.d/glusterd start && /etc/init.d/glusterfsd start
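Optionally, so the daemons come back after a reboot, the standard CentOS 6 way would be (an extra step, not in the original walkthrough):
chkconfig glusterd on
chkconfig glusterfsd on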
5. Next, add the IP/hostname mapping for the two back-end storage nodes to /etc/hosts:
[root@202_server ~]# vi /etc/hosts
127.0.0.1     localhost localhost.localdomain localhost4 localhost4.localdomain4
::1           localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.202 server202
192.168.1.203 server203
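A quick sanity check that the names resolve as intended (hostnames as defined in the hosts file above):
ping -c 2 server202
ping -c 2 server203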
6. From each machine, probe the other node: 202 adds 203, and 203 adds 202.
First, 202 adds 203:
[root@202_server ~]# gluster peer probe server203
peer probe: success.
[root@202_server ~]# gluster peer status
Number of Peers: 1

Hostname: server203
Uuid: 04aa2530-379b-40ff-9fc1-66552b1defe7
State: Peer in Cluster (Connected)

Then 203 adds 202:
[root@203 ~]# gluster peer probe server202
peer probe: success.
[root@203 ~]# gluster peer status
Number of Peers: 1

Hostname: server202
Uuid: 05b8fede-9576-4ffa-b525-40c1218d8bac
State: Peer in Cluster (Connected)
gluster peer status shows the state of the peer nodes.
7. Create a directory to serve as the root of the back-end storage; here we use /data:
mkdir /data
chmod -R 1777 /data
8. Pick either back-end storage node, 202 or 203, it does not matter which; two commands are enough to create and start the volume:
gluster volume create zhou replica 2 transport tcp server202:/data server203:/data force
gluster volume start zhou
Note:
8.1 Only one node needs to create the volume, not both; otherwise you will be told the volume already exists:
volume create: zhou: failed: Volume zhou already exists
8.2 "replica 2" means the data is stored redundantly: whatever the front end writes is written to both 202 and 203. Without this parameter the volume is distributed rather than replicated, so a file written through the front-end mount lands on only one of the two nodes and is not synchronized to the other.
8.3 The force at the end is required, otherwise the command will refuse and prompt you to add it; the same applies when you remove a brick later. The volume can then be verified as shown below.
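As mentioned in 8.3, it is worth confirming the volume really is a two-way replica before mounting anything. A minimal check, using the volume name zhou created above:
gluster volume info zhou
It should report Type: Replicate and list the two bricks, server202:/data and server203:/data.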
9. Suppose step 8 was run on 202; now go over to 203 and look at the situation:
[root@203 ~]# gluster volume status
Status of volume: zhou
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick server202:/data                           49153   Y       45883
Brick server203:/data                           49153   Y       45012
NFS Server on localhost                         2049    Y       45026
Self-heal Daemon on localhost                   N/A     Y       45030
NFS Server on server202                         2049    Y       45897
Self-heal Daemon on server202                   N/A     Y       45901

Task Status of Volume zhou
------------------------------------------------------------------------------
There are no active volume tasks
It's clear that the cluster has started.
----------------------------- Below is the configuration for the front-end mount -----------
10. As before, add the yum repository; there is not much more to say about that.
11. Install glusterfs-client:
yum -y install glusterfs-client
12. Once yum finishes, mount the volume with this command:
mount -t glusterfs server202:/zhou /data/
Note: Before mounting, edit /etc/hosts with the hostname-to-IP mapping first, otherwise the client has no way of knowing which machine server202 is. You also need to create the /data directory to use as the mount point. The full sequence is:
mkdir /data
chmod -R 1777 /data
mount -t glusterfs server202:/zhou /data/
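One optional refinement: server202 in the mount command is only used to fetch the volume layout, after which the client talks to both bricks directly. Even so, naming a fallback volfile server lets the mount succeed while 202 is down. A sketch, assuming the backupvolfile-server mount option available in GlusterFS clients of this era (check your client version):
mount -t glusterfs -o backupvolfile-server=server203 server202:/zhou /data/
The same idea in /etc/fstab, so the mount survives a client reboot:
server202:/zhou  /data  glusterfs  defaults,_netdev,backupvolfile-server=server203  0 0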
---------------------------- Below is the front-end test ----------------
13. On 204, create a file under /data; to be safe, create two in a row.
14. On 202, run ll /data; under normal circumstances you will see the files created in step 13. If you cannot, the problem is on your end.
15. On 203, run ll /data; again you should see the files created in step 13.
16. Power off 202, then create a file on 204. You will notice the creation is a bit slow this time, but it does not matter; it still succeeds.
17. On 203, run ll /data; under normal circumstances you will see the file created in step 16.
18. Following on from step 17: while 202 is down, the first file you create on 204 is a little slow. Don't worry, create a few more and you will find they complete quickly. This is normal; with 202 down, files created on 204 are written straight to 203.
19. Power 202 back on. You will find that the files created on 203 during the period 202 was down have already been synced over to 202.
20. By the same logic, try powering off 203 instead; the result is the same.
21. The high-availability cluster has been built successfully.
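If you want to repeat the test without doing everything by hand, the file-creation steps above can be scripted on the client. A rough sketch (it assumes the volume is mounted at /data on 204; powering the storage nodes off and on is still done manually):
# on 204: create a few test files and list them
for i in 1 2 3; do echo "test $i $(date)" > /data/failover_test_$i.txt; done
ls -l /data/failover_test_*
# on 202 and 203: the same files should appear under the brick directory
ls -l /data/failover_test_*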
-------------------------- Below, the front end is paired with HTTP to test downloads -------------------------
22. Install httpd with yum; nothing more to say here.
yum -y install httpd
23. Modify the httpd default document root:
vi /etc/httpd/conf/httpd.conf
Change DocumentRoot to "/data"
Then restart httpd: service httpd restart
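For reference, the relevant part of httpd.conf would end up looking roughly like this (a sketch for the Apache 2.2 shipped with CentOS 6; adapt it to your existing configuration):
DocumentRoot "/data"
<Directory "/data">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>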
24. Create a file on 204; anything will do.
(Screenshot: creating a test file, good.html, under /data on 204.)
25. Open http://192.168.1.204/good.html in a browser.
(Screenshot: the browser displays good.html successfully.)
It works. In other words, Apache's files can live directly on the back-end storage; there is no longer any need to keep two identical copies of every HTML file, which saves a lot of pointless duplication.
Of course, besides HTML files you can put anything else you want there: MP3s, movies, and so on.
--------------------------- Below is a collection of frequently asked questions --------------------------------------
[root@203 ~]# gluster peer status
Connection failed. Please check if gluster daemon is operational.
Please start the daemons first: /etc/init.d/glusterd start && /etc/init.d/glusterfsd start
[root@203 ~]# gluster peer probe server202
peer probe: failed: Probe returned with unknown errno 107
Buddy, please turn off the iptables firewall (or open the Gluster ports, as sketched below), thank you.
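If you would rather keep iptables running than turn it off, opening the Gluster ports should also clear errno 107. A sketch for GlusterFS 3.4+/3.5, where glusterd listens on 24007 and brick ports start at 49152 (the volume status output above shows a brick on 49153):
iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT
iptables -I INPUT -p tcp --dport 49152:49160 -j ACCEPT
service iptables save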
[root@202_server ~]# gluster volume remove-brick zhou server202:/data
Running remove-brick commands without an explicit option is deprecated, and will be removed in the next version of GlusterFS.
To forcibly remove a brick in the next version of GlusterFS, you will need to use "remove-brick force".
Whether you are creating a volume or removing a brick, add force at the end.
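For a replicated volume like zhou, remove-brick also needs the new replica count on top of force, as the help output further down shows. A hedged example (it would shrink the volume to a single copy, so treat it purely as an illustration of the syntax):
gluster volume remove-brick zhou replica 1 server202:/data force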
--------------------------- Below is a reference for the gluster volume management commands --------------------------------------
Run help to see what management commands you have:
[root@202_server ~]# gluster volume help
volume info [all|<VOLNAME>] - list information of all volumes
volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>?<vg_name>... [force] - create a new volume of specified type with mentioned bricks
volume delete <VOLNAME> - delete volume specified by <VOLNAME>
volume start <VOLNAME> [force] - start volume specified by <VOLNAME>
volume stop <VOLNAME> [force] - stop volume specified by <VOLNAME>
volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ... [force] - add brick to volume <VOLNAME>
volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... [start|stop|status|commit|force] - remove brick from volume <VOLNAME>
volume rebalance <VOLNAME> [fix-layout] {start|stop|status} [force] - rebalance operations
volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK> {start [force]|pause|abort|status|commit [force]} - replace-brick operations
volume set <VOLNAME> <KEY> <VALUE> - set options for volume <VOLNAME>
volume help - display help for the volume command
volume log rotate <VOLNAME> [BRICK] - rotate the log file for corresponding volume/brick
volume sync <HOSTNAME> [all|<VOLNAME>] - sync the volume information from a peer
volume reset <VOLNAME> [option] [force] - reset all the reconfigured options
volume profile <VOLNAME> {start|stop|info [nfs]} - volume profile operations
volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>|default-soft-limit <percent>} |
volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>} - quota translator specific operations
volume top <VOLNAME> {open|read|write|opendir|readdir|clear} [nfs|brick <brick>] [list-cnt <value>] |
volume top <VOLNAME> {read-perf|write-perf} [bs <size> count <count>] [brick <brick>] [list-cnt <value>] - volume top operations
volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]] [detail|clients|mem|inode|fd|callpool|tasks] - display status of all or specified volume(s)/brick
volume heal <VOLNAME> [{full | statistics {heal-count {replica
volume statedump <VOLNAME> [nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]... - perform statedump on bricks
volume list - list all volumes in cluster
volume clear-locks <VOLNAME> <path> kind {blocked|granted|all}{inode [range]|entry [basename]|posix [range]} - clear locks held on path
The role of each command is clear at a glance; you don't need me to explain them. A couple of examples are sketched below.
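A few quick illustrations of the commands above (example values only, not taken from the original setup):
gluster volume info zhou                  # show volume type, bricks and reconfigured options
gluster volume set zhou nfs.disable on    # volume set: switch off the built-in NFS server
gluster volume profile zhou start         # start collecting per-brick I/O statistics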
This article is from the "Zhou Clan" blog; reprinting is declined!