With three nodes, losing any one of them does not stop the application client from reading from and writing to the replica set. A quick Java test:
import java.util.ArrayList;
import java.util.List;
import com.mongodb.*;

public class TestMongoDBReplSet {
    public static void main(String[] args) {
        try {
            List<ServerAddress> addresses = new ArrayList<ServerAddress>();
            ServerAddress address1 = new ServerAddress("192.168.1.136", 27017);
            ServerAddress address2 = new ServerAddress("192.168.1.137", 27017);
            ServerAddress address3 = new ServerAddress("192.168.1.138", 27017);
            addresses.add(address1);
            addresses.add(address2);
            addresses.add(address3);
            MongoClient client = new MongoClient(addresses);
            DB db = client.getDB("test");
            DBCollection coll = db.getCollection("TestDB");
            // Insert
            BasicDBObject object = new BasicDBObject();
            object.append("test2", "testval2");
            coll.insert(object);
            DBCursor dbCursor = coll.find();
            while (dbCursor.hasNext()) {
                DBObject dbObject = dbCursor.next();
                System.out.println(dbObject.toString());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
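As a supplementary sketch (not part of the original test), the driver can also report which member is currently the primary via the replSetGetStatus admin command; the IP addresses below are the same assumed hosts as above, and the class name is made up for illustration:

import java.util.Arrays;
import com.mongodb.CommandResult;
import com.mongodb.MongoClient;
import com.mongodb.ServerAddress;

public class CheckReplSetStatus {
    public static void main(String[] args) throws Exception {
        MongoClient client = new MongoClient(Arrays.asList(
                new ServerAddress("192.168.1.136", 27017),
                new ServerAddress("192.168.1.137", 27017),
                new ServerAddress("192.168.1.138", 27017)));
        // replSetGetStatus lists every member with its state
        // (PRIMARY / SECONDARY / ARBITER), so running it before and after
        // shutting one node down shows the failover taking place.
        CommandResult status = client.getDB("admin").command("replSetGetStatus");
        System.out.println(status);
        client.close();
    }
}

Rerunning this after stopping the current primary should show a different member reported as PRIMARY, while the insert/query test above keeps working against the same address list.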
Failover seems to be well supported, so is this architecture good enough? In fact there is still plenty of room for optimization, starting with the second question raised earlier: how do we relieve the read/write pressure on the primary node? The common answer is read/write separation. So how is read/write separation done with a MongoDB replica set?
See the diagram: in a typical workload there are far more reads than writes, so one primary node handles the writes while the two secondary nodes handle the reads.
1. To enable read/write separation, set slaveOk on the secondary nodes (for example, by running rs.slaveOk() in the mongo shell on the secondary).
2. In the program, direct read operations to the secondary nodes, as in the following code:
import java.util.ArrayList;
import java.util.List;
import com.mongodb.*;

public class TestMongoDBReplSetReadSplit {
    public static void main(String[] args) {
        try {
            List<ServerAddress> addresses = new ArrayList<ServerAddress>();
            ServerAddress address1 = new ServerAddress("192.168.1.136", 27017);
            ServerAddress address2 = new ServerAddress("192.168.1.137", 27017);
            ServerAddress address3 = new ServerAddress("192.168.1.138", 27017);
            addresses.add(address1);
            addresses.add(address2);
            addresses.add(address3);
            MongoClient client = new MongoClient(addresses);
            DB db = client.getDB("test");
            DBCollection coll = db.getCollection("TestDB");
            BasicDBObject object = new BasicDBObject();
            object.append("test2", "testval2");
            // Read operation: read from a secondary node
            ReadPreference preference = ReadPreference.secondary();
            DBObject dbObject = coll.findOne(object, null, preference);
            System.out.println(dbObject);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Besides the secondary mode used above, there are five read preference modes in total: primary, primaryPreferred, secondary, secondaryPreferred, and nearest (a sketch of how to apply them with the Java driver follows this list).
primary: the default mode; reads only from the primary node.
primaryPreferred: reads from the primary node whenever possible, and falls back to a secondary only when the primary is unavailable.
secondary: reads only from secondary nodes; the drawback is that a secondary's data may be staler than the primary's.
secondaryPreferred: reads from secondary nodes, and falls back to the primary when no secondary is available.
nearest: reads from the node with the lowest network latency, whether it is the primary or a secondary.
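As a minimal sketch (using the same legacy 2.x Java driver and the same assumed IP addresses as the examples above; the class name is made up), these modes can be set for the whole client, per database, per collection, or per individual query:

import java.util.Arrays;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;
import com.mongodb.ReadPreference;
import com.mongodb.ServerAddress;

public class ReadPreferenceExamples {
    public static void main(String[] args) throws Exception {
        MongoClient client = new MongoClient(Arrays.asList(
                new ServerAddress("192.168.1.136", 27017),
                new ServerAddress("192.168.1.137", 27017),
                new ServerAddress("192.168.1.138", 27017)));

        // Client-wide default: read from the member with the lowest network latency.
        client.setReadPreference(ReadPreference.nearest());

        // Database-level: prefer the primary, fall back to a secondary if it is unavailable.
        DB db = client.getDB("test");
        db.setReadPreference(ReadPreference.primaryPreferred());

        // Collection-level: prefer a secondary, fall back to the primary.
        DBCollection coll = db.getCollection("TestDB");
        coll.setReadPreference(ReadPreference.secondaryPreferred());

        // The remaining modes are ReadPreference.primary() (the default)
        // and ReadPreference.secondary(), used in the example above.
        System.out.println(coll.getReadPreference());
        client.close();
    }
}

A preference passed to an individual operation, as in the findOne call earlier, overrides the collection-level setting, which in turn overrides the database- and client-level defaults.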
Good. With read/write separation in place we can spread the traffic and reduce the load, which answers the question "how do we relieve the read/write pressure on the primary node?". However, as we add more secondary nodes, the replication pressure on the primary node grows. What is the solution? MongoDB already provides a corresponding mechanism.
Look at the picture:
The arbiter (quorum) node stores no data; it is only responsible for voting during failover elections, so it adds no data replication pressure. Quite thoughtful: the MongoDB developers clearly know their way around large-scale data architecture. In fact, besides the primary, secondary, and arbiter nodes, there are additional member types: secondary-only, hidden, delayed, and non-voting (a configuration sketch follows the list below).
secondary-only: can never become the primary and only serves as a secondary replica, preventing low-performance nodes from being elected primary.
hidden: invisible to clients and can never become the primary, but it can vote; generally used for backing up data.
delayed: syncs from the primary with a configurable time delay, mainly for backups; with real-time replication, an accidental delete would immediately propagate to the secondaries and be unrecoverable.
non-voting: a secondary node with no voting rights, purely a backup data node.
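As a rough, hypothetical sketch of how these member types map onto replica set configuration fields (the _id values and host addresses are made up; such member documents belong in the members array passed to rs.reconfig() / the replSetReconfig command, not to the Java driver examples above):

import com.mongodb.BasicDBObject;

public class ReplSetMemberConfigSketch {
    public static void main(String[] args) {
        // Arbiter: stores no data, only votes in elections.
        BasicDBObject arbiter = new BasicDBObject("_id", 3)
                .append("host", "192.168.1.139:27017")
                .append("arbiterOnly", true);

        // One member combining the flags discussed above (for illustration only):
        BasicDBObject backupMember = new BasicDBObject("_id", 4)
                .append("host", "192.168.1.140:27017")
                .append("priority", 0)       // secondary-only: can never become primary
                .append("hidden", true)      // hidden: invisible to clients (requires priority 0)
                .append("slaveDelay", 3600)  // delayed: applies the primary's oplog one hour late
                .append("votes", 0);         // non-voting: does not take part in elections

        System.out.println(arbiter);
        System.out.println(backupMember);
    }
}

Combining hidden, slaveDelay, and votes on one member is just for illustration; a delayed member must have priority 0 and is usually hidden as well.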
The whole MongoDB replica set described so far solves two problems:
- If the primary node goes down, can connections switch over automatically? (Previously, manual switching was required.)
- How do we relieve the high read/write pressure on the primary node?
Two further issues remain to be resolved:
- Every secondary node holds a full copy of the database; won't the pressure on the secondaries become too large?
- When the data grows beyond what one machine can support, can the system scale out automatically?
While building the replica set, a few more questions came up:
- During replica set failover, how is the primary elected? Can we manually intervene to designate a particular node as primary?
- The official documentation says the number of replica set members should be odd; why?
- How does a MongoDB replica set synchronize data? What happens if synchronization falls behind? Will the data become inconsistent?
- Will MongoDB failover trigger automatically for no apparent reason? What conditions trigger it? Frequent triggering would increase the system load.
Testing a Java program's connection to the MongoDB replica set