In the following example, two shard servers, one config server, and one mongos service are started. All of these services are tested on the same machine; of course, they can also be deployed on different servers.
1. Start the shard servers
Start a pair of shard servers on the local machine:
- $ mkdir -p /data/db/a /data/db/b
- $ ./mongod --shardsvr --dbpath /data/db/a --port 10000 > /tmp/sharda.log &
- $ cat /tmp/sharda.log
- $ ./mongod --shardsvr --dbpath /data/db/b --port 10001 > /tmp/shardb.log &
- $ cat /tmp/shardb.log
2. Start the config server and mongos
Start the config server and then the mongos service:
- $ mkdir -p /data/db/config
- $ ./mongod --configsvr --dbpath /data/db/config --port 20000 > /tmp/configdb.log &
- $ cat /tmp/configdb.log
- $ ./mongos --configdb localhost:20000 > /tmp/mongos.log &
- $ cat /tmp/mongos.log
Note that mongos does not need a data directory of its own: all the data mongos requires is obtained from the config server. The config server can itself also be a cluster, which improves availability.
The two shard servers and the config server can of course be distributed across different machines, as long as the corresponding IP addresses are specified; here all services are deployed on the same server for testing.
NOTE: The default chunk size used by mongos is 200 MB. You can set the chunk size (in MB) when starting the mongos service. When the two shard servers balance data, it is migrated in units of chunks.
- $ ./mongos --configdb localhost:20000 --chunkSize 1 > /tmp/mongos.log &
This sets the chunk size to 1 MB.
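The chunk-based balancing described in the note above can be illustrated with a toy sketch. This is plain Python, not MongoDB's actual balancer (which is considerably more sophisticated); the `threshold` value and the one-chunk-at-a-time loop are simplifications chosen for illustration. It only shows the core idea: data moves between shards in whole-chunk units until the shards hold roughly equal numbers of chunks.

```python
# Toy illustration of chunk-based balancing between shards.
# NOT MongoDB's real balancer; it only demonstrates that data
# migrates between shards in whole-chunk units.

def balance(shards, threshold=2):
    """Move chunks from the fullest shard to the emptiest one
    until the difference in chunk counts drops below `threshold`."""
    while True:
        # Order shard names by how many chunks each currently holds.
        ordered = sorted(shards, key=lambda s: len(shards[s]))
        smallest, largest = ordered[0], ordered[-1]
        if len(shards[largest]) - len(shards[smallest]) < threshold:
            break
        # Migrate one whole chunk at a time.
        chunk = shards[largest].pop()
        shards[smallest].append(chunk)
    return shards

# shard0000 starts with 6 chunks, shard0001 with none.
shards = {"shard0000": [f"chunk{i}" for i in range(6)], "shard0001": []}
balance(shards)
print(len(shards["shard0000"]), len(shards["shard0001"]))  # → 3 3
```

A small chunk size (such as the 1 MB set above) means more, smaller chunks, so balancing moves happen in finer-grained units.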
3. Connect to the mongos service
If you connect on the local machine, you only need to run the ./mongo command; the default connection port is 27017. Of course, you can also connect to a remote server, in which case you need to specify that machine's IP address.
Next, we configure sharding after connecting to mongos.
- $ ./mongo
- MongoDB shell version: 1.6.0
- connecting to: test
- > use admin                                      // configuration must be done from the admin database
- switched to db admin
- > db.runCommand({addshard: "localhost:10000"})   // add a shard node
- { "shardAdded" : "shard0000", "ok" : 1 }
- > db.runCommand({addshard: "localhost:10001"})
- { "shardAdded" : "shard0001", "ok" : 1 }
Next, you need to specify the database to be sharded, and each collection in that database to be sharded must specify one or more keys (the shard key), which is used to split the data.
- > db.runCommand({enablesharding: "test"})
- { "ok" : 1 }
- > db.runCommand({shardcollection: "test.people", key: {name: 1}})
- { "ok" : 1 }
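The shard key set above determines which chunk, and hence which shard, each document lands in. A minimal sketch of range-based routing on the name key follows; the split points and chunk-to-shard assignments here are invented for illustration, not taken from a real cluster.

```python
import bisect

# Toy range-based routing on the shard key {name: 1}.
# Chunks are defined by split points on `name`; each chunk is
# assigned to a shard. Split points and owners below are invented.
split_points = ["g", "p"]          # chunks: (-inf,"g"), ["g","p"), ["p",+inf)
chunk_owner = ["shard0000", "shard0001", "shard0000"]

def route(name):
    """Return the shard owning the chunk whose key range covers `name`."""
    i = bisect.bisect_right(split_points, name)
    return chunk_owner[i]

print(route("alice"))  # → shard0000  (falls in (-inf, "g"))
print(route("helen"))  # → shard0001  (falls in ["g", "p"))
print(route("zoe"))    # → shard0000  (falls in ["p", +inf))
```

This is why the choice of shard key matters: documents are split and migrated along ranges of that key.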
4. Check the database status
- > use config
- switched to db config
- > show collections
- chunks
- databases
- lockpings
- locks
- mongos
- settings
- shards
- system.indexes
- version
These collections contain all of the sharding configuration information.
5. Relationship between sharding and replication
Data partitioning is equivalent to scaling the database horizontally: data that was previously managed in one place is spread across different servers, which improves the database's scalability. Replication, discussed earlier, exists to improve data availability, so that a single node failure does not make data unavailable.
The figure below shows how horizontal scaling (sharding) and vertical replication together form a data grid.