MongoDB Master-Slave Replication and Replica Set (replSet) Configuration Tutorial

MongoDB replication is important, especially since the storage engine does not currently support single-server durability. Replication can be used not only for failover and data integrity, but also for read scaling, hot backups, and as a data source for offline batch processing.

1. Master-slave replication
Master-slave replication is the most common replication mode in MongoDB. It can be used for backup, failover, read scaling, and more.
The basic setup is one master node and one or more slave nodes; each slave needs to know the address of the master. Run mongod --master to start the master server, and mongod --slave --source master_address to start a slave.
[root@test02 ~]# mongod --fork --dbpath /data/node2 --logpath /data/mongodb.log --port 10001 --logappend --master
For the slave node, choose a different data directory and port, and use --source to tell it the master's address:
[root@test02 ~]# mongod --fork --dbpath /data/node3 --logpath /data/mongodb2.log --port 10002 --logappend --slave --source localhost:10001
All slaves replicate from the master; there is currently no slave-to-slave replication, because slaves do not keep their own oplog.
There is no explicit limit on the number of slaves in a cluster, but having many slaves query a single master will overwhelm it; in practice, clusters with no more than 12 nodes run well.

1.1 Options
(1) --only
Replicate only the specified database on the slave (by default all databases are replicated).
(2) --slavedelay
Used on a slave to add a delay (in seconds) when applying the master's operations. This makes it easy to set up a delayed slave, which protects against users accidentally deleting important data or inserting garbage data; the delayed operations leave a window for recovery.
(3) --fastsync
Start the slave from a data snapshot of the master. If the data directory begins as a snapshot of the master's data, starting with this option is much faster than doing a full sync.
(4) --autoresync
Automatically resynchronize if the slave falls out of sync with the master.
(5) --oplogSize
The size of the master's oplog (in megabytes).
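For example, a delayed slave that replicates only one database could be started like this (a sketch; the database name, paths, port, and one-hour delay are illustrative, not from the original setup):

[root@test02 ~]# mongod --fork --dbpath /data/node4 --logpath /data/mongodb3.log --port 10004 --logappend --slave --source localhost:10001 --only test --slavedelay 3600 --autoresync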

1.2 Adding and deleting sources

cat >> /etc/hosts << EOF
192.168.27.212 test02
192.168.27.213 test03
192.168.27.214 test01
EOF

You can specify the master with --source when starting the slave, or you can configure the source from the shell.
[root@test02 ~]# mongod --fork --dbpath /data/node3 --logpath /data/mongodb.log --port 10003 --logappend --slave
Insert 192.168.27.212:10001 as a source on the slave node:

> db.sources.insert({"host": "192.168.27.212:10001"});

Querying now returns the inserted document:

> use local
switched to db local
> db.sources.find();
{ "_id" : ObjectId("530be5049ab1ad709cfe66b7"), "host" : "test02:10001" }

When synchronization completes, the document is updated:

> db.sources.find();
{ "_id" : ObjectId("530bf0ab058022d91574c79c"), "host" : "test02:10001", "source" : "main", "syncedTo" : Timestamp(1393291443, 1), "dbsNextPass" : { "foo" : true, "test" : true } }
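To stop replicating from a source, delete its document from local.sources again (a sketch, reusing the host added above):

> use local
switched to db local
> db.sources.remove({"host": "192.168.27.212:10001"});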

2. Replica set
A replica set is a master-slave cluster with automatic failover. The most obvious difference from a plain master-slave cluster is that a replica set has no fixed master node: the cluster elects a master and switches to another node when the current one stops working. A replica set always has one active node and one or more backup nodes.
The biggest advantage of a replica set is this automation.

mongod --fork --dbpath /data/node2 --logpath /data/mongodb.log --port 10001 --logappend --replSet myrepl/test03:10002
mongod --fork --dbpath /data/node3 --logpath /data/mongodb.log --port 10002 --logappend --replSet myrepl/test02:10001

A highlight of replica sets is their self-detection: once you specify a single server, MongoDB automatically discovers and connects to the remaining nodes.
After starting the servers, the log will tell you that the replica set has not been initialized. The set must be initialized from the shell.
Connect to any one of the servers; the initialization command is executed only once:

> db.runCommand({"replSetInitiate": {
... "_id": "myrepl",
... "members": [
...     {
...         "_id": 1,
...         "host": "test02:10001"
...     },
...     {
...         "_id": 2,
...         "host": "test03:10002"
...     }
... ]}})
{
    "startupStatus" : 4,
    "info" : "myrepl/test03:10002",
    "ok" : 0,
    "errmsg" : "all members and seeds must be reachable to initiate set"
}

(1) "_id": "myrepl" — the name of the replica set.
(2) "members": [...] — the list of servers in the set; each entry needs at least two keys.
(3) "_id": N — a unique ID for each server.
(4) "host": hostname — the host (and port) of the server.
Or:

config = {"_id": "myrepl", "members": [{"_id": 0, "host": "test02:10001"}, {"_id": 1, "host": "test03:10002"}]}
rs.initiate(config);
rs.status();
myrepl:SECONDARY> rs.status();
{
    "set" : "myrepl",
    "date" : ISODate("2014-02-25T02:17:39Z"),
    "myState" : 2,
    "syncingTo" : "test03:10002",
    "members" : [
        {
            "_id" : 0,
            "name" : "test02:10001",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 968,
            "optime" : Timestamp(1393294457, 1),
            "optimeDate" : ISODate("2014-02-25T02:14:17Z"),
            "errmsg" : "syncing to: test03:10002",
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "test03:10002",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : ...,
            "optime" : Timestamp(1393294457, 1),
            "optimeDate" : ISODate("2014-02-25T02:14:17Z"),
            "lastHeartbeat" : ISODate("2014-02-25T02:17:38Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-25T02:17:39Z"),
            "pingMs" : 1,
            "syncingTo" : "test02:10001"
        }
    ],
    "ok" : 1
}
 

If you stop the primary node and try to write on a secondary, the following error occurs:

myrepl:SECONDARY> db.test.insert({name: "baobao"});
not master

A replica set with only two mongod instances cannot fail over safely; an additional member is needed to break ties in elections. Each node's role can be configured as one of the following:
(1) Standard: a regular node that stores a complete copy of the data, participates in election votes, and can become the active node.
(2) Passive: stores a complete copy of the data and votes in elections, but cannot become the active node.
(3) Arbiter: only votes; it does not replicate data and cannot become the active node.
When the primary goes down, a new primary can be elected from the secondaries with the arbiter's vote, avoiding a single point of failure.
Add a quorum (arbiter) node that is used only for arbitration and stores no data:

mongod --fork --dbpath /data/node1 --logpath /data/mongodb.log --port 10003 --logappend --replSet myrepl/test02:10001,test03:10002
myrepl:PRIMARY> rs.addArb("test01:10003");
{ "ok" : 1 }

To view the status of each node:

myrepl:PRIMARY> rs.status();
{
    "set" : "myrepl",
    "date" : ISODate("2014-02-25T02:30:26Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "test02:10001",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 1735,
            "optime" : Timestamp(1393295409, 1),
            "optimeDate" : ISODate("2014-02-25T02:30:09Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "test03:10002",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 204,
            "optime" : Timestamp(1393295409, 1),
            "optimeDate" : ISODate("2014-02-25T02:30:09Z"),
            "lastHeartbeat" : ISODate("2014-02-25T02:30:26Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-25T02:30:24Z"),
            "pingMs" : 1,
            "syncingTo" : "test02:10001"
        },
        {
            "_id" : 2,
            "name" : "test01:10003",
            "health" : 1,
            "state" : 6,
            "stateStr" : "UNKNOWN",
            "uptime" : ...,
            "lastHeartbeat" : ISODate("2014-02-25T02:30:25Z"),
            "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
            "pingMs" : 1,
            "lastHeartbeatMessage" : "still initializing"
        }
    ],
    "ok" : 1
}

Compare how the nodes judge their own role (the primary's view, then a secondary's):

myrepl:PRIMARY> db.isMaster();
{
    "setName" : "myrepl",
    "ismaster" : true,
    "secondary" : false,
    "hosts" : [ "test03:10002", "test02:10001" ],
    "arbiters" : [ "test01:10003" ],
    "primary" : "test03:10002",
    "me" : "test03:10002",
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "localTime" : ISODate("2014-02-25T02:32:29.760Z"),
    "ok" : 1
}
myrepl:SECONDARY> db.isMaster();
{
    "setName" : "myrepl",
    "ismaster" : false,
    "secondary" : true,
    "hosts" : [ "test02:10001", "test03:10002" ],
    "arbiters" : [ "test01:10003" ],
    "primary" : "test03:10002",
    "me" : "test02:10001",
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "localTime" : ISODate("2014-02-25T02:33:50.144Z"),
    "ok" : 1
}

To make a member a standard or passive node, set the priority key in its configuration.
The default priority is 1; it can range from 0 to 1000.
The "arbiterOnly" key designates an arbiter node.
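For example, a passive member and an arbiter can both be declared in the initial configuration (a sketch reusing the hostnames above; the priority assignment is illustrative):

config = {
    "_id": "myrepl",
    "members": [
        {"_id": 0, "host": "test02:10001"},                      // standard node, default priority 1
        {"_id": 1, "host": "test03:10002", "priority": 0},       // passive: holds data and votes, never becomes primary
        {"_id": 2, "host": "test01:10003", "arbiterOnly": true}  // arbiter: votes only, stores no data
    ]
}
rs.initiate(config);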
Backup nodes pull the oplog from the active node and replay the operations, just like a backup server in a hot-standby system. A backup node also writes the operations to its own local oplog, so that it can in turn become the active node. Each operation in the oplog carries a strictly increasing ordinal, which is used to judge how up to date the data is.
2.1 Failover and active node elections
If the active node fails, the remaining nodes elect a new active node. The election requires a majority of the replica set; arbiter nodes take part only in voting, which avoids deadlocks. The node with the highest priority becomes the new active node.
The active node uses heartbeats to track how many nodes in the cluster are visible to it. If that drops below a majority, the active node automatically demotes itself to a backup node. This prevents the cluster from ever having more than one active node.
Whenever the active node changes, the new active node's data is assumed to be the most recent data in the system. Operations applied on the other nodes but not on the new active node are rolled back, and all nodes synchronize from the new active node after joining it: they examine their own oplog, find the operations the new active node lacks, and then request the latest copies of the documents affected by those operations.
A node that is performing a resynchronization is marked as recovering and cannot become a candidate for active node until the process completes.
2.2 The detailed replSet configuration of a replica set is covered later (see the PS section at the end of this article).

3. Performing operations on slave nodes
The primary role of a slave node is as a failover mechanism, protecting against data loss or loss of service when the master fails.
A slave node can also serve as a source for backups, scale out read performance, or run data processing jobs.
3.1 Read Extensions
One way to scale reads with MongoDB is to route queries to slave nodes, reducing the load on the master. This works well when the load is read-intensive; for write-intensive workloads, scale out with auto-sharding instead.
The main caveat of using slaves to scale reads is that replication is asynchronous: after an insert or update on the master, a slave's data may be momentarily out of date.
Scaled-out reads require the special slaveOkay option, which tells the slave server that it is allowed to handle read requests.
If you query a secondary directly without it, the following error occurs:

myrepl:SECONDARY> db.test.find();
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }

You need to tell MongoDB that reading from this machine is acceptable:

myrepl:SECONDARY> rs.slaveOk();
myrepl:SECONDARY> db.test.find();
{ "_id" : ObjectId("530bfc79eee2c2ce39f9cd95"), "name" : "caoqing" }
{ "_id" : ObjectId("530bfd8f3627cb16c15dcb32"), "name" : "xiaobao" }

3.2 Data processing on slave nodes
Another use for a slave node is offloading intensive processing or aggregation, to avoid hurting the master's performance. Start an ordinary slave, but pass --master as well. Using --master and --slave together may seem contradictory, but it means you can write to the slave, query it as usual, and even treat it as a master for other nodes, while it still replicates data from the real master. This way, blocking operations can run on the slave without affecting the master's performance.
A database that the slave replicates must not already exist on the slave when the slave is first started; if it does, that database cannot complete a full synchronization and will only apply subsequent updates.
With this technique, make sure writes never go to a database the slave is replicating from the master: the slave cannot undo such writes, so it would no longer mirror the master correctly.

4. Working principle
MongoDB replication requires at least two servers or nodes. One of them, the master, handles client requests; the others are slaves, responsible for mirroring the master's data. The master keeps a record of all operations performed on it.
The slaves poll the master periodically for these operations and then apply them to their own copy of the data. Because they perform the same operations as the master, the slaves' data stays synchronized with the master's.
4.1 oplog
The master's record of operations is called the oplog (operation log). The oplog is stored in a special database called local. In master-slave mode it lives in the oplog.$main collection; in a replica set it lives in oplog.rs. Each document in the oplog represents one operation performed on the master.

myrepl:PRIMARY> db.oplog.$main.help();

To view the contents of the oplog:

myrepl:PRIMARY> use local
switched to db local
myrepl:PRIMARY> show collections;
me
oplog.rs
replset.minvalid
slaves
startup_log
system.indexes
system.replset
myrepl:PRIMARY> db.oplog.rs.find();
{ "ts" : Timestamp(1393294283, 1), "h" : NumberLong(0), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(1393294457, 1), "h" : NumberLong("-8949844291534979055"), "v" : 2, "op" : "i", "ns" : "test.test", "o" : { "_id" : ObjectId("530bfc79eee2c2ce39f9cd95"), "name" : "caoqing" } }
{ "ts" : Timestamp(1393294735, 1), "h" : NumberLong("677282438107403253"), "v" : 2, "op" : "i", "ns" : "test.test", "o" : { "_id" : ObjectId("530bfd8f3627cb16c15dcb32"), "name" : "xiaobao" } }
{ "ts" : Timestamp(1393295409, 1), "h" : NumberLong("5171944912929102944"), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "Reconfig set", "version" : 2 } }

Each oplog document contains the following keys:
(1) ts: the timestamp of the operation. The timestamp is an internal type used to track when operations were performed; it consists of a 4-byte timestamp and a 4-byte incrementing counter.
(2) op: the operation type, a 1-byte code (for example, "i" for insert).
(3) ns: the namespace the operation was performed in.
(4) o: the document that further specifies the operation, for example the document to insert.
The oplog records only operations that change the state of the database; it exists purely as a mechanism for keeping slaves synchronized with the master.
The operations stored in the oplog are not exactly the same as the operations performed on the master: they first undergo an idempotent transformation, so they can safely be applied multiple times on a slave, as long as they are applied in order.
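For example, an $inc on the master is recorded in the oplog as a $set of the resulting value, so replaying the entry any number of times yields the same document (an illustrative sketch; the collection name, values, and hash are made up):

myrepl:PRIMARY> db.counters.update({"_id": 1}, {"$inc": {"count": 1}});
myrepl:PRIMARY> use local
switched to db local
myrepl:PRIMARY> db.oplog.rs.find({"ns": "test.counters"}).sort({"$natural": -1}).limit(1);
{ "ts" : Timestamp(1393295500, 1), "h" : NumberLong("..."), "v" : 2, "op" : "u", "ns" : "test.counters", "o2" : { "_id" : 1 }, "o" : { "$set" : { "count" : 2 } } }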
The oplog is stored in a capped (fixed-size) collection, which guarantees it never grows beyond its preset size. The --oplogSize parameter, given when the mongod service is created, specifies that size in megabytes.
On 64-bit Linux, the default allocation is 5% of the remaining disk space.
4.2 Synchronization
The first time a slave node starts, it performs a full synchronization of the master's data. Copying every document from the master consumes significant resources. After the initial sync completes, the slave queries the master's oplog and applies those operations to keep its data up to date.
If a slave falls too far behind the master's oplog, it becomes stale. This can happen when the slave has been down, is overloaded with reads, or has just finished a full sync while the master's oplog has already wrapped around.
When a slave goes stale, replication stops and the slave must redo a full resynchronization. You can trigger it manually with the {resync: 1} command, or start the slave with --autoresync to let it resynchronize automatically. Resynchronization is costly, so avoid it by configuring a large enough oplog.
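The manual resync is issued on the slave against the admin database (a sketch):

> use admin
switched to db admin
> db.runCommand({"resync": 1});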
4.3 Replication state and the local database
The local database is used to hold all internal replication state, on both the master and the slaves. Its contents stay local and are never replicated, which guarantees that each MongoDB server has exactly one local database.
The local database is not limited to MongoDB's internal state: if you have documents you do not want replicated, you can also put them in a collection in local.
The replication state on the master includes a list of its slaves, stored in the slaves collection:

myrepl:PRIMARY> db.slaves.find();
{ "_id" : ObjectId("530bfbdc911eb0ac3bf2aa8b"), "config" : { "_id" : 1, "host" : "test03:10002" }, "ns" : "local.oplog.rs", "syncedTo" : Timestamp(1393295409, 1) }

Slaves also keep state in their local database: a unique identifier in the me collection, and a list of sources (masters) in the sources collection.

myrepl:SECONDARY> db.me.find();
{ "_id" : ObjectId("530bfbdc911eb0ac3bf2aa8b"), "host" : "test03" }

Both masters and slaves track a slave's replication progress through the timestamp stored in "syncedTo".
4.4 Blocking replication
Developers can use getLastError's "w" parameter to ensure that writes are replicated. Running getLastError with "w" blocks until at least N servers have replicated the most recent write.
Without options, it simply checks for errors in the last database operation on the current connection:

myrepl:PRIMARY> db.runCommand("getLastError")
{
    "n" : 0,
    "lastOp" : Timestamp(0, 0),
    "connectionId" : 3525,
    "err" : null,
    "ok" : 1
}

When you specify the "w" option, you can also pass "wtimeout" to set a timeout in milliseconds.
Blocking replication makes writes noticeably slower, especially for large values of "w".
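For example, to block until the latest write has reached at least two servers, giving up after 10 seconds (a sketch; the values are illustrative):

myrepl:PRIMARY> db.runCommand({"getLastError": 1, "w": 2, "wtimeout": 10000});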

5. Management
5.1 Management
MongoDB contains a number of useful administrative tools to view the status of replication.
View the oplog status with the following command:

db.printReplicationInfo()

myrepl:PRIMARY> db.printReplicationInfo();
configured oplog size:   997.7892578125001MB
log length start to end: 1126secs (0.31hrs)
oplog first event time:  Tue Feb 25 2014 10:11:23 GMT+0800 (CST)
oplog last event time:   Tue Feb 25 2014 10:30:09 GMT+0800 (CST)
now:                     Feb 2014 02:07:23 GMT+0800 (CST)

The output includes the oplog's configured size and the time range of the operations it records.
Next, view the synchronization status of the slaves:

myrepl:PRIMARY> db.printSlaveReplicationInfo();
source:   test03:10002
    syncedTo: Tue Feb 25 2014 10:30:09 GMT+0800 (CST)
        = 56533 secs ago (15.7hrs)
source:   test01:10003
    no replication info, yet.  State: ARBITER

The output includes each slave's hostname and port, and how far it lags behind.
5.2 Change the size of the Oplog
If the oplog size turns out to be inappropriate, the easiest fix is to stop the master, delete the files of the local database, and restart with the desired setting.

# rm -rf /data/node2/local*

Pre-allocating space for a large oplog is time-consuming and can add downtime on the master, so pre-allocate the data files manually whenever possible.
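After removing the old local files, restart the master with the new oplog size (a sketch; the 2048 MB value is illustrative):

[root@test02 ~]# mongod --fork --dbpath /data/node2 --logpath /data/mongodb.log --port 10001 --logappend --master --oplogSize 2048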
5.3 Authentication issues in replication
If authentication is enabled alongside replication, some configuration is needed so that slaves can access the master's data. Both the master and the slaves must add a user to their local database, with the same username and password on every node.
When a slave connects to the master, it authenticates with a user stored in local.system.users. It first tries the "repl" user; if that user does not exist, it uses the first available user in local.system.users.
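A minimal sketch of that setup, to be run on both the master and the slave with identical credentials (db.addUser is the 2.4-era API; later versions use db.createUser; the password is a placeholder):

> use local
switched to db local
> db.addUser("repl", "replpassword");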

PS: Detailed replSet configuration for replica sets
1. MongoDB: Create a replica set
1.1 Start two MongoDB instances separately

mongod --fork --dbpath /data/node2 --logpath /data/mongodb.log --port 10001 --logappend --replSet myrepl/test03:10002
mongod --fork --dbpath /data/node3 --logpath /data/mongodb.log --port 10002 --logappend --replSet myrepl/test02:10001

1.2 Initializing the replica set

config = {"_id": "myrepl", "members": [{"_id": 0, "host": "test02:10001"}, {"_id": 1, "host": "test03:10002"}]}
rs.initiate(config);
rs.status();
myrepl:SECONDARY> rs.status();
{
    "set" : "myrepl",
    "date" : ISODate("2014-02-25T02:17:39Z"),
    "myState" : 2,
    "syncingTo" : "test03:10002",
    "members" : [
        {
            "_id" : 0,
            "name" : "test02:10001",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 968,
            "optime" : Timestamp(1393294457, 1),
            "optimeDate" : ISODate("2014-02-25T02:14:17Z"),
            "errmsg" : "syncing to: test03:10002",
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "test03:10002",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : ...,
            "optime" : Timestamp(1393294457, 1),
            "optimeDate" : ISODate("2014-02-25T02:14:17Z"),
            "lastHeartbeat" : ISODate("2014-02-25T02:17:38Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-25T02:17:39Z"),
            "pingMs" : 1,
            "syncingTo" : "test02:10001"
        }
    ],
    "ok" : 1
}
 

1.3 Add an arbiter node, used only for arbitration, not for storing data.

mongod --fork --dbpath /data/node1 --logpath /data/mongodb.log --port 10003 --logappend --replSet myrepl/test02:10001,test03:10002
myrepl:PRIMARY> rs.addArb("test01:10003");
{ "ok" : 1 }

2. MongoDB: Add a node to the replica set
2.1 Existing environment:

myrepl:PRIMARY> rs.conf();
{
    "_id" : "myrepl",
    "version" : 2,
    "members" : [
        {
            "_id" : 0,
            "host" : "test02:10001"
        },
        {
            "_id" : 1,
            "host" : "test03:10002"
        },
        {
            "_id" : 2,
            "host" : "test01:10003",
            "arbiterOnly" : true
        }
    ]
}

There are three existing nodes: two standard nodes and one arbiter.
2.2 Add Nodes
2.2.1 Create the data directory and log file:

mkdir -p /data/node/
touch /data/mongodb.log

2.2.2 Install MongoDB:

tar zxf mongodb-linux-x86_64-2.4.9.tgz
mv mongodb-linux-x86_64-2.4.9 /opt/mongodb
echo "export PATH=\$PATH:/opt/mongodb/bin" >> /etc/profile
source /etc/profile

2.2.3 Create a config file for the new slave node, then start it:

cat >> ~/.mongodb.conf << EOF
fork = true
port = 10004
dbpath = /data/node
logpath = /data/mongodb.log
logappend = true
replSet = myrepl
EOF
mongod --config ~/.mongodb.conf

2.2.4 Update the host information:

cat /etc/sysconfig/network
cat >> /etc/hosts << EOF
192.168.27.214 test01
192.168.27.212 test02
192.168.27.213 test03
192.168.27.215 test04
EOF

2.2.5 Determine whether the current node is the primary:

myrepl:PRIMARY> rs.isMaster();
{
    "setName" : "myrepl",
    "ismaster" : true,
    "secondary" : false,
    "hosts" : [
        "test02:10001",
        "test03:10002"
    ],
    "arbiters" : [
        "test01:10003"
    ],
    "primary" : "test02:10001",
    "me" : "test02:10001",
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "localTime" : ISODate("2014-02-25T19:23:22.286Z"),
    "ok" : 1
}

2.2.6 Add the new slave node to the replica set:

myrepl:PRIMARY> rs.add("192.168.27.215:10004");
# to add an arbiter node instead: myrepl:PRIMARY> rs.addArb("test01:10003");
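Removing a member is symmetric: pass rs.remove() the host string exactly as it appears in rs.conf() (a sketch):

myrepl:PRIMARY> rs.remove("192.168.27.215:10004");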

2.2.7 View the replica set configuration again:

myrepl:PRIMARY> rs.conf();
{
    "_id" : "myrepl",
    "version" : 3,
    "members" : [
        {
            "_id" : 0,
            "host" : "test02:10001"
        },
        {
            "_id" : 1,
            "host" : "test03:10002"
        },
        {
            "_id" : 2,
            "host" : "test01:10003",
            "arbiterOnly" : true
        },
        {
            "_id" : 3,
            "host" : "192.168.27.215:10004"
        }
    ]
}

3. Test
3.1 Insert data on the primary node:

myrepl:PRIMARY> db.test.insert({"name": "xiaohuan", "age": 30});

3.2 Query the data from a secondary node:

myrepl:SECONDARY> rs.slaveOk();
myrepl:SECONDARY> db.test.find();
{ "_id" : ObjectId("530bfc79eee2c2ce39f9cd95"), "name" : "caoqing" }
{ "_id" : ObjectId("530bfd8f3627cb16c15dcb32"), "name" : "xiaobao" }
{ "_id" : ObjectId("530ceed64770e9f00a279900"), "name" : "xiaohuan", "age" : 30 }

4. Turn a standard node into a passive node

myrepl:PRIMARY> cfg = rs.conf()
{
    "_id" : "myrepl",
    "version" : 3,
    "members" : [
        {
            "_id" : 0,
            "host" : "test02:10001"
        },
        {
            "_id" : 1,
            "host" : "test03:10002"
        },
        {
            "_id" : 2,
            "host" : "test01:10003",
            "arbiterOnly" : true
        },
        {
            "_id" : 3,
            "host" : "192.168.27.215:10004"
        }
    ]
}
myrepl:PRIMARY> cfg.members[3].priority = 0;
myrepl:PRIMARY> rs.reconfig(cfg);
myrepl:PRIMARY> rs.conf();
{
    "_id" : "myrepl",
    "version" : 4,
    "members" : [
        {
            "_id" : 0,
            "host" : "test02:10001"
        },
        {
            "_id" : 1,
            "host" : "test03:10002"
        },
        {
            "_id" : 2,
            "host" : "test01:10003",
            "arbiterOnly" : true
        },
        {
            "_id" : 3,
            "host" : "192.168.27.215:10004",
            "priority" : 0
        }
    ]
}
