I) A sharded cluster includes:
1. At least 3 config servers: they store the chunk-to-shard mapping that determines which data lives on which shard.
2. One or more mongos routers: they distribute the client's read and write requests to the shards. Applications connect to mongos; they never access a shard directly.
3. Two or more shards: each shard is a standalone mongod or a replica set. In a development or test environment a shard can be a standalone mongod and need not be a replica set.
II) How sharded reads and writes work: the application accesses mongos (the query router); mongos asks the config servers which shard the requested data belongs to, and then reads or writes against that specific shard. Each shard is a replica set.

    App driver
        |
      mongos          (there can be multiple mongos)
       /|\
  config / config / config
        |
  shard1 (replica set)    shard2 (replica set)
III) Production environment deployment considerations:
1. 3 config servers, deployed on different machines.
2. Each shard is a replica set of 2 or more members.
3. One or more mongos, deployed on the servers where the application runs.
IV) Deployment
4.1 IP plan for the environment:
1. At least 3 config servers (note that an odd number is required):
   192.168.62.152:27052
   192.168.62.154:27054
   192.168.62.155:27055
2. At least 1 mongos server; more than one is recommended in production:
   192.168.62.153:27788
   192.168.62.155:27799
3. No fewer than 2 replica-set shards:
   Shard 1: 192.168.62.153:17053, 192.168.62.154:17054, 192.168.62.155:17055
   Shard 2: 192.168.62.155:17155, 192.168.62.153:17153, 192.168.62.152:17152
"Overall deployment steps"
1. Deploy the 2 shard replica sets first; do not configure a super administrator or database administrator, and do not create any database.
2. Deploy the 3 mongo config services; no super administrator or database administrator is needed.
3. Deploy the 2 mongos routing services.
4. On a mongos server, log in through the localhost exception (./mongo --port 27788) and create the super administrator.
5. Add the mongo config servers on the mongos server.
6. Add the database administrator on the mongos server.
7. Create the indexes required for sharding on the mongos server.
8. Add the 2 shards and the shard keys on the mongos server.
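Step 4 above can be sketched in the mongo shell as follows; this is a hedged sketch, not part of the original walkthrough, and the role choice is an assumption (the account name and password match the ones used later in this document):

```javascript
// Connect through the localhost exception while no users exist yet:
//   ./mongo --port 27788
use admin
db.createUser({
  user: "superadmin",                        // account used later in this walkthrough
  pwd:  "admin123",                          // password used later in this walkthrough
  roles: [ { role: "root", db: "admin" } ]   // assumed role; adjust to your own policy
})
```

After this user exists, all further administrative commands on mongos must authenticate with db.auth("superadmin", "admin123").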
"Important" All servers (config / mongos / shard replica sets) share the same mongodb-keyfile string so that the cluster servers can authenticate to one another, e.g.:
ijmyg3al15ek0fwibibhvar9ok/D0deqxtkcnoyaqyz2woin/icxlljszpx0f+6amm
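A keyfile like the one above can be generated with openssl; mongod refuses keyfiles whose permissions are looser than owner-only. A minimal sketch (the /tmp path is an illustrative assumption, not the path used in this deployment):

```shell
# Generate a shared cluster keyfile (base64 content, single line).
openssl rand -base64 60 > /tmp/mongodb-keyfile
# mongod requires the keyfile to be readable only by its owner.
chmod 600 /tmp/mongodb-keyfile
cat /tmp/mongodb-keyfile
```

Copy the same file to every config, mongos, and shard server and point each process at it with --keyFile.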
4.2 Deploy the configuration servers (mongo config: mongod instances that store the cluster metadata; a production environment needs at least 3, running on different servers). Repeat the following 2 steps on the 155/154/152 servers; the ports may differ. The installation directory is /soft/configmongodb.
1) On each configuration server: mkdir -p /data/configdb
2) Start the configuration server (no administrator account or password needs to be set up). If the MongoDB commands have not been added to an environment variable, run them from the bin directory of the installation:
On 155:
cd /soft/configmongodb/bin
./mongod --configsvr --dbpath /soft/configmongodb/db --port 27055 --keyFile=/soft/configmongodb/mongodb-keyfile &
On 154:
cd /soft/configmongodb/bin
./mongod --configsvr --dbpath /soft/configmongodb/db --port 27054 --keyFile=/soft/configmongodb/mongodb-keyfile &
On 152:
cd /soft/configmongodb/bin
./mongod --configsvr --dbpath /soft/configmongodb/db --port 27052 --keyFile=/soft/configmongodb/mongodb-keyfile &
4.3 Deploy the mongos services, which route client operations to the specific shards; at least one mongos service is required. Note that every mongos must list the config servers in the same order (192.168.62.152:27052,192.168.62.154:27054,192.168.62.155:27055); otherwise it will report an error. Because mongos has no data nodes, create the administrator locally with ./mongo --port 27799. The mongos super account and password are the same as the replica sets'; the commands below are executed after the superuser has been created.
On 153:
cd /soft/mongosmongo/bin
./mongos --configdb 192.168.62.152:27052,192.168.62.154:27054,192.168.62.155:27055 --port 27788 --keyFile=/soft/mongosmongo/mongodb-keyfile &
On 155:
cd /soft/mongosmongo/bin
./mongos --configdb 192.168.62.152:27052,192.168.62.154:27054,192.168.62.155:27055 --port 27799 --keyFile=/soft/mongosmongo/mongodb-keyfile &
4.4 Add shards to the cluster. A shard can be a standalone mongod service or a replica set; in a production environment you should use replica sets to get read/write separation and failover. This case uses 2 replica sets to implement sharding.
For replica set deployment, refer to the sections above. Note that if the whole environment is deployed on 1-2 machines, the data directory of each mongod instance must not be in the same place.
Within a replica set you only need to create a super administrator on the primary (this step can even be omitted); do not create database-specific administrators. Wait until the shard has been joined successfully, then create the specific administrators and indexes on the mongos server.
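The warning above about co-located mongod instances can be sketched as follows: give every instance its own dbpath before starting it. The /tmp layout is an illustrative assumption mirroring the /soft directories used in this walkthrough:

```shell
# One data directory per mongod instance when several run on the same host.
for d in /tmp/soft/configmongodb/db /tmp/soft/shardmongo/db /tmp/soft/shard2mongo/db; do
  mkdir -p "$d"
done
ls -d /tmp/soft/*/db
```

Each mongod is then started with its own --dbpath (or dbpath in its mongodb.conf), so no two instances ever share a data directory.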
4.4.1 Start shard replica set 1, shard name: shard1. On 153, 154, 155:
cd /soft/shardmongo/bin
./mongod --shardsvr --auth --replSet shard1 --config /soft/shardmongo/mongodb.conf
On the primary, initialize the replica set:
config = { "_id": "shard1", "version": 1, "members": [ { "_id": 1, "host": "192.168.62.153:17053" }, { "_id": 2, "host": "192.168.62.154:17054" }, { "_id": 3, "host": "192.168.62.155:17055" } ] }
rs.initiate(config)
4.4.2 Start shard replica set 2, shard name: shard2. On 155, 153, 152:
cd /soft/shard2mongo/bin
./mongod --shardsvr --auth --replSet shard2 --config /soft/shard2mongo/mongodb.conf
On the primary, initialize the replica set:
config = { "_id": "shard2", "version": 1, "members": [ { "_id": 1, "host": "192.168.62.155:17155" }, { "_id": 2, "host": "192.168.62.153:17153" }, { "_id": 3, "host": "192.168.62.152:17152" } ] }
rs.initiate(config)
4.5 Add the shards on mongos. Pick one of the mongos instances, connect, and run:
./mongo 192.168.62.153:27788
use admin
db.auth("superadmin", "admin123")
4.5.1 Add shard 1: sh.addShard("shard1/192.168.62.153:17053"). Format: shard name/at least one mongod instance in the shard. On success it prints: { "shardAdded": "shard1", "ok": 1 }
PS: to add a standalone mongod instance, simply run sh.addShard("192.168.62.153:17053").
4.5.2 Add shard 2: sh.addShard("shard2/192.168.62.155:17155"). If it reports the error "can't add shard shard2/192.168.62.155:17155 because a local database 'hezx' exists in another shard1:shard1/192.168.62.153:17053,192.168.62.154:17054,192.168.62.155:17055", you need to drop the hezx database from the primaries of shard1 and shard2 first, and then, after the shard has been added successfully, create the database on the mongos server.
Run the same commands on the other mongos server as well.
4.5.3 Enable sharding for a database: sh.enableSharding("hezx")
4.6 Create the shard key. The shard key must be indexed: if the collection already contains data, you need to run the create-index command manually; if the collection is empty, the index is created automatically during this step. If a compound index is used, the shard key must be a prefix of that index.
sh.shardCollection("hezx.message", { "to": 1 })
Description: shards the message collection of the hezx database on the "to" key; the shard key can also be compound.
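The index-before-sharding rule above can be sketched as follows, run on mongos after sh.enableSharding; this is a hedged sketch and is only needed when the collection already holds data:

```javascript
// hezx.message already contains documents, so build the shard-key index first:
use hezx
db.message.createIndex({ "to": 1 })
// Then shard the collection on that indexed key:
sh.shardCollection("hezx.message", { "to": 1 })
```

If the collection were empty, the sh.shardCollection call alone would suffice, since the index is created automatically in that case.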
At this point the MongoDB sharded cluster is deployed. Useful shard-related commands:
1. sh.status(): shows the state of the cluster: which shards exist, which databases are sharded, and what the shard keys are.
2. db.message.stats(): run against a collection to see how it is sharded and how much data sits on each shard.
MongoDB 3.0.1 sharded deployment; there are not yet many deployment documents for 3.0 available in China (He Zhixiong).