As the diagram shows, there are four components: mongos, config server, shard, and replica set.
mongos: the entry point for all cluster requests. Every request is coordinated through mongos, so the application needs no routing logic of its own; mongos is a request-dispatch center, responsible for forwarding each data request to the appropriate shard server. In production there are usually several mongos instances serving as entry points, so that the cluster keeps accepting requests even if one of them goes down.
Config server: as the name implies, the configuration server stores all of the database metadata (routing and sharding information). mongos itself does not physically store shard or routing information; it only caches it in memory, while the config server holds the authoritative copy. On first start or on restart, mongos loads its configuration from the config server, and when the configuration changes, the config server notifies all mongos instances to update their state so that routing stays accurate. In production there are usually multiple config servers, because the shard-routing metadata they hold must not be lost: even if one of them goes down, the MongoDB cluster keeps running as long as a surviving copy remains.
Shard: the legendary shard itself. As mentioned above, a single machine has a hard ceiling on capacity no matter how capable it is; as in war, one person, however heroic, cannot defeat a whole division alone. As the saying goes, three cobblers with their wits combined equal Zhuge Liang: this is where the strength of the team shows. On the Internet, likewise, a group of ordinary machines can do far more than any single machine.
We are asked to build a 6-shard cluster (the diagram above shows 3 shards for reference). Data is stored in memory (tmpfs) instead of on disk to increase speed (machine configuration: 32 cores, 260 GB of memory).
1. Preparation on the 3 machines
yum -y install numactl vim lrzsz
mkdir -p /data/{work,app}
mkdir -p /data/work/mongodb/conf
Resize the tmpfs storage space:
umount /dev/shm/
mount tmpfs /dev/shm -t tmpfs -o size=200g
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        78G  1.1G   73G   2% /
/dev/sda1       485M   31M  429M   7% /boot
/dev/sdb2       3.6T   33M  3.6T   1% /data
/dev/sda2       197G  267M  187G   1% /home
tmpfs           200G     0  200G   0% /dev/shm
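Note that a plain `mount` does not survive a reboot; if the machines may be rebooted, the tmpfs size can also be pinned in `/etc/fstab`. A minimal sketch (the `defaults` option set is an assumption; adjust to taste):

```shell
# Build the fstab entry for the 200G tmpfs first, so it can be reviewed
FSTAB_LINE="tmpfs /dev/shm tmpfs defaults,size=200g 0 0"
echo "$FSTAB_LINE"
# then append it as root:
#   echo "$FSTAB_LINE" >> /etc/fstab
```

Keep in mind that with dbPath on tmpfs, a reboot wipes that node's data, and the replica set must resync it from the other members.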
cd /data/work/mongodb/
mkdir {shard1,shard2,shard3,shard4,shard5,shard6,server,mongos}
cd /dev/shm/
mkdir {shard1,shard2,shard3,shard4,shard5,shard6,server}
2. Start the shard services
wget http://10.31.67.32:8099/Download/mongodb/mongodb-linux-x86_64-rhel62-3.4.2.tgz
tar -zxvf mongodb-linux-x86_64-rhel62-3.4.2.tgz
mv mongodb-linux-x86_64-rhel62-3.4.2 /data/app/mongodb
To enable authentication, generate a keyfile on any one server:
openssl rand -base64 741 > keyfile
chmod 600 keyfile
Note: upload it to every server in the cluster as /data/work/mongodb/mongo-keyfile
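The keyfile steps can be combined into a small script; the distribution loop is only a sketch (it assumes root SSH access to the three cluster IPs from this article):

```shell
# Generate the keyfile and lock it down: mongod refuses to start with a
# keyfile that is group- or world-readable, so mode 600 is required.
openssl rand -base64 741 > mongo-keyfile
chmod 600 mongo-keyfile

# Distribute it to every node in the cluster (assumes root SSH access):
# for h in 10.33.100.117 10.33.100.118 10.33.100.119; do
#   scp mongo-keyfile ${h}:/data/work/mongodb/mongo-keyfile
# done
```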
cd /data/work/mongodb/conf
Upload the configuration files:
cat /data/work/mongodb/conf/shard1.conf
storage:
  dbPath: /dev/shm/shard1
  journal:
    enabled: true
  directoryPerDB: true
  #syncPeriodSecs: 60
  engine: wiredTiger
processManagement:
  fork: true
  pidFilePath: /data/work/mongodb/shard1/mongod.pid
net:
  port: 27011
  http:
    enabled: false
systemLog:
  destination: file
  path: /data/work/mongodb/shard1/mongod.log
  logAppend: true
operationProfiling:
  slowOpThresholdMs: 100
  mode: slowOp
# Uncomment the following to enable user authentication:
#security:
#  keyFile: /data/work/mongodb/mongo-keyfile
#  authorization: enabled
replication:
  oplogSizeMB: 20000
  replSetName: rs001
For shard2 through shard6, change the port (27012-27016), the log and storage paths, and the replica set name (rs002-rs006) in each configuration file accordingly.
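Rather than editing five files by hand, the shard2-shard6 configs can be derived from shard1.conf with sed. This helper is a sketch (the function name is hypothetical) and relies on the port/path/replica-set naming pattern used above:

```shell
# Clone shard1.conf into shard2..shard6, rewriting the directory names,
# the port (27011 -> 27012..27016), and the replica set name (rs001 -> rs00N).
gen_shard_confs() {
  local src="$1" dir="$2"
  for i in 2 3 4 5 6; do
    sed -e "s/shard1/shard${i}/g" \
        -e "s/27011/2701${i}/g" \
        -e "s/rs001/rs00${i}/g" "$src" > "${dir}/shard${i}.conf"
  done
}
# On a cluster node:
# gen_shard_confs /data/work/mongodb/conf/shard1.conf /data/work/mongodb/conf
```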
Start the shards on each of the 3 machines:
numactl --interleave=all /data/app/mongodb/bin/mongod --shardsvr -f /data/work/mongodb/conf/shard1.conf
numactl --interleave=all /data/app/mongodb/bin/mongod --shardsvr -f /data/work/mongodb/conf/shard2.conf
numactl --interleave=all /data/app/mongodb/bin/mongod --shardsvr -f /data/work/mongodb/conf/shard3.conf
numactl --interleave=all /data/app/mongodb/bin/mongod --shardsvr -f /data/work/mongodb/conf/shard4.conf
numactl --interleave=all /data/app/mongodb/bin/mongod --shardsvr -f /data/work/mongodb/conf/shard5.conf
numactl --interleave=all /data/app/mongodb/bin/mongod --shardsvr -f /data/work/mongodb/conf/shard6.conf
Execute all of the above on each of the 3 machines.
Spread the primary, secondary, and arbiter roles across the machines as follows:
Host          | rs001     | rs002     | rs003     | rs004     | rs005     | rs006
10.33.100.117 | primary   | arbiter   | secondary | primary   | arbiter   | secondary
10.33.100.118 | secondary | primary   | arbiter   | secondary | primary   | arbiter
10.33.100.119 | arbiter   | secondary | primary   | arbiter   | secondary | primary
Log in on each of ports 27011-27016 and initiate the corresponding replica set; for example, for rs006:
/data/app/mongodb/bin/mongo --port 27016
cfg = {_id: "rs006", members: [{_id: 0, host: "10.33.100.119:27016", priority: 2}, {_id: 1, host: "10.33.100.117:27016", priority: 1}, {_id: 2, host: "10.33.100.118:27016", arbiterOnly: true}]};
rs.initiate(cfg)
rs.status()
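The other five replica sets are initiated the same way. Following the role table above, rs001 on port 27011 would look like this (a sketch, with priorities chosen to match the table):

```
cfg = {_id: "rs001", members: [
  {_id: 0, host: "10.33.100.117:27011", priority: 2},      // primary
  {_id: 1, host: "10.33.100.118:27011", priority: 1},      // secondary
  {_id: 2, host: "10.33.100.119:27011", arbiterOnly: true} // arbiter
]};
rs.initiate(cfg)
```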
3. Start the config servers
cat /data/work/mongodb/conf/server.conf
storage:
  dbPath: /dev/shm/server
  journal:
    enabled: true
  directoryPerDB: true
  #syncPeriodSecs: 60
  engine: wiredTiger
processManagement:
  fork: true
  pidFilePath: /data/work/mongodb/server/mongod.pid
net:
  port: 27020
  http:
    enabled: false
systemLog:
  destination: file
  path: /data/work/mongodb/server/mongod.log
  logAppend: true
replication:
  replSetName: configreplset
Start the config service on each of the 3 machines:
/data/app/mongodb/bin/mongod --configsvr -f /data/work/mongodb/conf/server.conf
/data/app/mongodb/bin/mongo --port 27020
rs.initiate({_id: "configreplset", configsvr: true, members: [{_id: 0, host: "10.33.100.117:27020"}, {_id: 1, host: "10.33.100.118:27020"}, {_id: 2, host: "10.33.100.119:27020"}]})
4. Start the mongos routers on the 3 machines
cat /data/work/mongodb/conf/mongos.conf
processManagement:
  fork: true
  pidFilePath: /data/work/mongodb/mongos/mongos.pid
net:
  port: 27030
  http:
    enabled: false
systemLog:
  destination: file
  path: /data/work/mongodb/mongos/mongos.log
  logAppend: true
sharding:
  configDB: configreplset/10.33.100.117:27020,10.33.100.118:27020,10.33.100.119:27020
  # addresses and port of the config servers
numactl --interleave=all /data/app/mongodb/bin/mongos -f /data/work/mongodb/conf/mongos.conf
At this point each machine should be running 8 MongoDB processes (6 shard mongod, 1 config-server mongod, 1 mongos).
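A quick sanity check for the process count on a node (a sketch; the `[m]` bracket trick keeps grep from matching itself):

```shell
# Count mongod/mongos processes on this node; a fully started node shows 8
# (6 shard mongod + 1 config-server mongod + 1 mongos).
n=$(ps -ef | grep -cE '[m]ongod|[m]ongos' || true)
echo "mongodb processes: $n"
```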
/data/app/mongodb/bin/mongo --port 27030
Add the 6 shards in turn:
sh.addShard("rs001/10.33.100.117:27011,10.33.100.118:27011,10.33.100.119:27011")
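The remaining shards follow the same pattern; only the replica-set name and port change:

```
sh.addShard("rs002/10.33.100.117:27012,10.33.100.118:27012,10.33.100.119:27012")
sh.addShard("rs003/10.33.100.117:27013,10.33.100.118:27013,10.33.100.119:27013")
sh.addShard("rs004/10.33.100.117:27014,10.33.100.118:27014,10.33.100.119:27014")
sh.addShard("rs005/10.33.100.117:27015,10.33.100.118:27015,10.33.100.119:27015")
sh.addShard("rs006/10.33.100.117:27016,10.33.100.118:27016,10.33.100.119:27016")
```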
Test sharding:
sh.enableSharding("test")
sh.shardCollection("test.log", {id: 1})
use test
for (var i = 1; i <= 100000; i++) {
  db.log.save({id: i, "message": "message" + i});
}
sh.status()
db.log.stats()
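Before dropping the collection, it is worth confirming that the documents really spread across the six shards; `getShardDistribution()` is a standard mongo-shell collection helper:

```
db.log.getShardDistribution()   // per-shard document counts, data size, and chunk counts
```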
db.log.drop()
This article is from the "Learning to Eternity" blog, please make sure to keep this source http://hzcsky.blog.51cto.com/1560073/1913947
MongoDB 3.4 cluster, shard mode