The initial construction of the Amoeba environment was completed in the Year of the Dragon, and verification of read/write splitting and load balancing began in the Year of the Snake. Best wishes to everyone for the Year of the Snake. The previous work only introduced the Amoeba framework; it does not yet satisfy the read/write splitting scenario, for the most basic of reasons: there is only one SQL node.
Therefore, we first need to add an SQL node; the specific steps are not described in detail here. After adding the node and restarting, the MySQL Cluster environment is as follows.
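For reference, the resulting cluster topology can be sketched as a config.ini for the management node. The management and data node addresses below are hypothetical placeholders; only the SQL node address 10.4.44.200 appears in this article, and 10.4.44.201 is inferred from the node name SQL201 used later:

```ini
[ndbd default]
# two replicas across the data nodes
NoOfReplicas=2

[ndb_mgmd]
# management node -- hypothetical address
HostName=10.4.44.210

[ndbd]
# data node 1 -- hypothetical address
HostName=10.4.44.211

[ndbd]
# data node 2 -- hypothetical address
HostName=10.4.44.212

[mysqld]
# existing SQL node (SQL201)
HostName=10.4.44.201

[mysqld]
# newly added SQL node (SQL200)
HostName=10.4.44.200
```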
Modify Amoeba's dbServers.xml configuration and add a dbServer node for 10.4.44.200.
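A sketch of what the new entry might look like, following the template shipped with Amoeba 2.x; the name "server2" and the abstractServer parent are assumptions here, only the IP address comes from the text:

```xml
<!-- newly added SQL node; "server2" is an assumed name -->
<dbServer name="server2" parent="abstractServer">
	<factoryConfig>
		<!-- address of the new SQL node -->
		<property name="ipAddress">10.4.44.200</property>
	</factoryConfig>
</dbServer>
```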
Restart the Amoeba service to make the configuration take effect.
At this point, you can use the previously written JDBC code to access the Amoeba service directly and perform data operations. You only need to modify the IP address and port (8066 by default).
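As a minimal sketch of this change, the JDBC URL simply points at the Amoeba proxy instead of a MySQL node. The host 10.4.44.205 and the default port 8066 come from the article; the database name and credentials are placeholders that must match your schema and the authenticator settings in amoeba.xml:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class AmoebaJdbcDemo {

    // Build a JDBC URL that targets the Amoeba proxy instead of a MySQL
    // node directly; only the host and port differ from a plain MySQL URL.
    public static String amoebaUrl(String host, int port, String db) {
        return "jdbc:mysql://" + host + ":" + port + "/" + db;
    }

    // Issues a trivial query through Amoeba. This needs a running cluster
    // and the MySQL JDBC driver on the classpath, so main() does not call it.
    public static int queryOne(String url, String user, String pass) throws SQLException {
        try (Connection conn = DriverManager.getConnection(url, user, pass);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            rs.next();
            return rs.getInt(1);
        }
    }

    public static void main(String[] args) {
        // "test" is a placeholder database name
        System.out.println(amoebaUrl("10.4.44.205", 8066, "test"));
    }
}
```

Everything else in the client code stays unchanged: Amoeba speaks the MySQL wire protocol, so the driver cannot tell it apart from a real MySQL server.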
Tests show that accessing the Amoeba service does perform load balancing at the SQL layer: observing the virtual machines' network metrics, the network load of the two SQL nodes is roughly even.
SQL200 network load
Network load of SQL201 at the same point in time
Read/write splitting configuration.
The routing behavior is controlled by the queryRouter section of amoeba.xml, which references the rule files and names the write and read pools:

<queryRouter class="com.meidusa.amoeba.mysql.parser.MysqlQueryRouter">
	<property name="ruleLoader">
		<bean class="com.meidusa.amoeba.route.TableRuleFileLoader">
			<property name="ruleFile">${amoeba.home}/conf/rule.xml</property>
			<property name="functionFile">${amoeba.home}/conf/ruleFunctionMap.xml</property>
		</bean>
	</property>
	<property name="sqlFunctionFile">${amoeba.home}/conf/functionMap.xml</property>
	<property name="LRUMapSize">1500</property>
	<property name="defaultPool">server2</property>
	<property name="writePool">server2</property>
	<property name="readPool">multiPool</property>
	<property name="needParse">true</property>
</queryRouter>
The key is to uncomment the readPool and writePool configuration items, which are commented out by default. Each can be set either to a single node name or to the name of a virtual node defined in dbServers.xml. For example, multiPool is a virtual node that contains the server1 and server2 nodes; Amoeba distributes reads across them according to the configured load balancing policy.
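For reference, a virtual node of this kind is typically declared in dbServers.xml roughly as follows. This is a sketch based on the template shipped with Amoeba 2.x; the loadbalance value of 1 selects round-robin:

```xml
<!-- virtual pool spanning both SQL nodes -->
<dbServer name="multiPool" virtual="true">
	<poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">
		<!-- 1 = round robin, 2 = weight-based, 3 = high availability -->
		<property name="loadbalance">1</property>
		<!-- comma-separated names of the pooled dbServer nodes -->
		<property name="poolNames">server1,server2</property>
	</poolConfig>
</dbServer>
```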
Note: the preceding configuration enables quick read/write splitting without data sharding.
Connect to the Amoeba node at 10.4.44.205 for read and write operations and observe the VM monitoring charts: when reading data, the load is shared across the server1 and server2 nodes; when writing data, only server2 carries the load.
Note Amoeba's current limitations: transactions are not supported; stored procedures are not supported either (support is planned for the near future); it is not suitable for exporting data through Amoeba or for queries with very large result sets (for example, a request returning more than a million rows); and combined database-and-table sharding is not supported. Currently Amoeba supports only database sharding, and every shard node must have the same database table structure.
Recalling the fourth point above: since this test environment is based on multiple SQL nodes of a MySQL Cluster, the underlying data nodes synchronize data natively, so there is no possibility of inconsistent databases or tables across nodes. OneCoder also believes that the usage scenario Amoeba was designed for is based on independent MySQL nodes. That is the verification work OneCoder plans to consider next :)
Original article: "MySQL Cluster SQL Node Load Balancing and Read/Write Splitting Verification, Based on Amoeba". Thanks to the original author for sharing.