MongoDB 3.0.1 sharded (partitioned) deployment. There is still little deployment documentation for 3.0 in Chinese (He Zhixiong).
1) Components of a sharded cluster:
1. Config servers: at least three. They store the mapping of data chunks to shards, which determines where each piece of data lives.
2. Query routers (mongos): one or more. A mongos distributes client reads and writes to the shards; applications connect to the mongos and never access the shards directly.
3. Shards: two or more. Each shard is a standalone mongod or a replica set. In a development or test environment a shard can be a standalone mongod; it does not have to be a replica set.
2) How sharded reads and writes work: the application connects to a mongos (also called a query router); the mongos asks the config servers which shard holds the requested data, then reads from or writes to that specific shard. Each shard here is a replica set.

    App / Driver
         |
       mongos      (one or more mongos can be used)
       /     \
   config    shards (replica sets)

3) Production deployment considerations:
1. The three config servers must be deployed on different machines.
2. Each shard is a replica set of two or more members.
3. One or more mongos instances, deployed on the servers where the application runs.
4) Deployment
4.1 IP planning for this environment:
1. At least three config servers (an odd number of config servers is required):
192.168.62.152:27052
192.168.62.154:27054
192.168.62.155:27055
2. At least one mongos server; for production use, deploy several:
192.168.62.153:27788
192.168.62.155:27799
3. No fewer than two replica-set shards:
Shard 1: 192.168.62.153:17053, 192.168.62.154:17054, 192.168.62.155:17055
Shard 2: 192.168.62.155:17155, 192.168.62.153:17153, 192.168.62.152:17152
[General deployment procedure]
1. Deploy the two shard replica sets first. Do not create a super administrator or database administrators on them, and do not create any databases.
2. Deploy the three config servers, again without creating a super administrator or database administrators.
3. Deploy the two mongos routing services.
4. On a mongos server, log in through the localhost exception (./mongo --port 27788) and create the super administrator.
5. On the mongos server, add the config servers (they are passed via --configdb at startup).
6. On the mongos server, add a database administrator.
7. On the mongos server, create the index required for sharding on the database.
8. On the mongos server, add the two shards and define the shard key.
[Important] All servers (config / mongos / shard replica sets) use the same mongodb-keyfile string: forward/D0DEqxtkCNoyaQyz2wOIN/IcXLLjsZPX0F+6AMM
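The keyfile only needs to be generated once and then copied to every server. A minimal sketch using coreutils (the /tmp path is a stand-in for this sketch, not the path used in the original deployment):

```shell
# Generate a random base64 keyfile once, then copy it to every server.
# /tmp/mongodb-keyfile is a hypothetical path for illustration.
head -c 756 /dev/urandom | base64 | tr -d '\n' > /tmp/mongodb-keyfile
# mongod refuses to start with a keyfile that has open permissions.
chmod 600 /tmp/mongodb-keyfile
```

Every mongod and mongos in the cluster must point at an identical copy of this file via --keyFile.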
4.2 Deploy the config servers (the mongod instances that store the cluster metadata; at least three in a production environment, each running on a different server). On servers 155, 154, and 152, repeat the following two steps; the ports may differ. The installation directory is /soft/configMongoDB.
1) On each config server: mkdir -p /data/configdb
2) Start the config server (no administrator account or password is required at this stage). If mongod is not on your PATH, run it from the bin directory of the installation.
On 155:
cd /soft/configMongoDB/bin
./mongod --configsvr --dbpath /soft/configMongoDB/db --port 27055 --keyFile=/soft/configMongoDB/mongodb-keyfile &
On 154:
cd /soft/configMongoDB/bin
./mongod --configsvr --dbpath /soft/configMongoDB/db --port 27054 --keyFile=/soft/configMongoDB/mongodb-keyfile &
On 152:
cd /soft/configMongoDB/bin
./mongod --configsvr --dbpath /soft/configMongoDB/db --port 27052 --keyFile=/soft/configMongoDB/mongodb-keyfile &
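The three invocations above differ only in the port. A small hypothetical wrapper (it only prints the command it would run; paths follow the layout used above) makes that explicit:

```shell
# Print the config-server start command for the given port.
# Paths are the ones used in this guide; adjust for your installation.
configsvr_cmd() {
  echo "./mongod --configsvr --dbpath /soft/configMongoDB/db --port $1 --keyFile=/soft/configMongoDB/mongodb-keyfile &"
}
configsvr_cmd 27055   # on 155
configsvr_cmd 27054   # on 154
configsvr_cmd 27052   # on 152
```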
4.3 Deploy the mongos service, which routes client operations to the specific shards. At least one mongos must be deployed. Note that the config-server list (192.168.62.152:27052,192.168.62.154:27054,192.168.62.155:27055) must be given in the same order on every mongos; otherwise an error is returned. Because mongos holds no data of its own, you can create the administrator locally with ./mongo --port 27799. The super account and password on mongos are the same as those on the replica sets. After creating the super user, run the following commands.
On 153:
cd /soft/mongosMongo/bin
./mongos --configdb 192.168.62.152:27052,192.168.62.154:27054,192.168.62.155:27055 --port 27788 --keyFile=/soft/mongosMongo/mongodb-keyfile &
On 155:
cd /soft/mongosMongo/bin
./mongos --configdb 192.168.62.152:27052,192.168.62.154:27054,192.168.62.155:27055 --port 27799 --keyFile=/soft/mongosMongo/mongodb-keyfile &
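An easy way to avoid the ordering pitfall mentioned above is to keep the config-server list in a single variable (or file) and reuse it verbatim on every mongos host. A sketch:

```shell
# The --configdb list must be byte-for-byte identical (same order) on every mongos.
CONFIGDB="192.168.62.152:27052,192.168.62.154:27054,192.168.62.155:27055"
# On 153: ./mongos --configdb "$CONFIGDB" --port 27788 --keyFile=/soft/mongosMongo/mongodb-keyfile &
# On 155: ./mongos --configdb "$CONFIGDB" --port 27799 --keyFile=/soft/mongosMongo/mongodb-keyfile &
echo "$CONFIGDB"
```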
4.4 Add shards to the cluster (a shard can be a standalone mongod service or a replica set; in production, use replica sets for read/write splitting and failover). Here, two replica sets implement the sharding. For how to deploy a replica set, see the preceding section. Note that if you deploy the entire environment on one or two machines, each mongod instance must use its own data directory. Within a shard replica set you only need to create a super administrator on the primary (this step can even be omitted); do not create application databases or their administrators there. After the shards have been added, create the specific administrators and indexes on the mongos server.
4.4.1 Start shard replica set 1, shard name: shard1. On 153, 154, and 155 run:
cd /soft/shardMongo/bin
./mongod --shardsvr --auth --replSet shard1 --config /soft/shardMongo/mongodb.conf &
Then initialize the replica set:
config = {"_id": "shard1", "version": 1, "members": [{"_id": 1, "host": "192.168.62.153:17053"}, {"_id": 2, "host": "192.168.62.154:17054"}, {"_id": 3, "host": "192.168.62.155:17055"}]}
rs.initiate(config)
4.4.2 Start shard replica set 2, shard name: shard2. On 155, 153, and 152 run:
cd /soft/shard2Mongo/bin
./mongod --shardsvr --auth --replSet shard2 --config /soft/shard2Mongo/mongodb.conf &
Then initialize the replica set:
config = {"_id": "shard2", "version": 1, "members": [{"_id": 1, "host": "192.168.62.155:17155"}, {"_id": 2, "host": "192.168.62.153:17153"}, {"_id": 3, "host": "192.168.62.152:17152"}]}
rs.initiate(config)
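The contents of /soft/shardMongo/mongodb.conf are not shown in the original. A plausible minimal example is sketched below; all values are assumptions derived from the command lines and ports above (MongoDB 3.0 accepts YAML-format config files), with --shardsvr, --auth, and --replSet still supplied on the command line:

```yaml
# /soft/shardMongo/mongodb.conf -- hypothetical example, adjust per member
storage:
  dbPath: /soft/shardMongo/db
net:
  port: 17053          # use 17054 / 17055 on the other members of shard1
security:
  keyFile: /soft/shardMongo/mongodb-keyfile
```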
4.5 Add the shards via mongos. Connect to one of the mongos instances and run:
./mongo 192.168.62.153:27788
use admin
db.auth("superAdmin", "admin123")
4.5.1 Add shard 1: sh.addShard("shard1/192.168.62.153:17053"). Format: shard name / at least one mongod instance of that shard. On success it returns {"shardAdded": "shard1", "ok": 1}. PS: to add a standalone mongod, run sh.addShard("192.168.62.153:17053") directly.
4.5.2 Add shard 2: sh.addShard("shard2/192.168.62.155:17155"). If this fails with "can't add shard shard2/192.168.62.155:17155 because a local database 'hezx' exists in another shard1: shard1/192.168.62.153:17053,192.168.62.154:17054,192.168.62.155:17055", delete the hezx database on the primaries of shard1 and shard2, then create the database through the mongos server instead.
You can perform the same operations on the other mongos server with the commands above. 4.5.3 Enable sharding for a database: sh.enableSharding("hezx")
4.6 Create the shard key. The shard key must be indexed: if the collection already contains data, you must create the index manually first; if the collection is empty, the index is created automatically when the collection is sharded. For a compound index, the shard key must be a prefix of the index. sh.shardCollection("hezx.message", {"to": 1}) makes "to" the shard key of the message collection in the hezx database; a compound shard key is also possible.
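Putting the two cases together, a mongo-shell session sketch (collection and field names taken from the example above; the extra "time" field is hypothetical, only to illustrate the compound-index prefix rule):

```javascript
// Run on a mongos, authenticated as an administrator.
use hezx
// If message already holds data, create the index before sharding:
db.message.createIndex({ "to": 1, "time": 1 })
// The shard key {to: 1} is a prefix of the index, so this succeeds:
sh.shardCollection("hezx.message", { "to": 1 })
```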
At this point, the MongoDB sharded cluster is deployed. Useful commands:
1. sh.status() shows the sharding status: which shards exist, which databases are sharded, and what the shard keys are.
2. db.message.stats(), run against a chosen collection, shows how that collection is partitioned across the shards, the storage it occupies, and the data size.