When I set up this cluster last time, I didn't have time to write a test article (see my previous post). Today I'm writing up the preliminary tests. If you want to know how the ipvs cluster handles other situations, leave me a message!
First, simulate shutting down the master and slave databases and check data synchronization:
1. Stop and restart the master node:
Method ①: ./pg_ctl stop -D ../data/
waiting for server to shut down............................................................... failed
pg_ctl: server does not shut down
HINT: The "-m fast" option immediately disconnects sessions rather than
waiting for session-initiated disconnection.
The shutdown reports failure because the default (smart) mode waits for existing sessions to disconnect, but new connections to the database are already refused with an error!
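For example, while the smart shutdown is pending, a new connection attempt is rejected. A quick check, assuming psql sits in the same bin directory and the server listens on the default port 5432:
./psql -p 5432 -U postgres
psql: FATAL:  the database system is shutting down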
Method ②: ./pg_ctl stop -D ../data/ -m fast
This shuts the database down immediately; connecting afterwards shows that the service is gone.
After the master node is restarted, the cluster is not affected.
Method ③: kill -9 8581 8582 8584 8585 8586 8587 8589 8597 8669
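(The PIDs above are the postmaster and its background processes; they can be listed with something like the following before killing them.)
ps -ef | grep postgres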
Try inserting a row on one of the slave nodes;
(In theory the slave is read-only, but the insert is attempted anyway to show the result.)
Insert failed: ERROR: cannot execute INSERT in a read-only transaction
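A minimal reproduction of that attempt, assuming a hypothetical test table test_tb on the cluster and the slave listening on port 5433 (both are illustrative assumptions):
./psql -p 5433 -U postgres -c "INSERT INTO test_tb VALUES (1);"
ERROR:  cannot execute INSERT in a read-only transaction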
Start the master node, insert a row, and verify the cluster:
The cluster is working correctly!
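A sketch of that verification, reusing the hypothetical test_tb table (master on port 5432 and slave on 5433 are assumptions):
./pg_ctl start -D ../data/                                        # bring the master back up
./psql -p 5432 -U postgres -c "INSERT INTO test_tb VALUES (2);"   # insert on the master
./psql -p 5433 -U postgres -c "SELECT * FROM test_tb;"            # the new row should be visible on the slave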
2. Shut down the slave nodes
Method ①: ./pg_ctl stop -D ../data_bac/ -m fast
Insert a row on the master node, then restart the slave node and check whether the data has been synchronized.
Data is synchronized successfully.
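The full sequence, under the same assumptions as above (hypothetical test_tb, master on 5432, slave on 5433):
./pg_ctl stop -D ../data_bac/ -m fast                             # stop the slave
./psql -p 5432 -U postgres -c "INSERT INTO test_tb VALUES (3);"   # insert while the slave is down
./pg_ctl start -D ../data_bac/                                    # restart the slave
./psql -p 5433 -U postgres -c "SELECT count(*) FROM test_tb;"     # the new row has been replayed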
Method ②: kill -9 19971 19972 19973 19974 19975 19976
Insert two rows on the master, then start the slave node normally and check whether the data is synchronized:
Even when the slave dies unexpectedly, the data is synchronized after it restarts.
3. Kill the master node in the middle of an insert workload
Method ①: ./pg_ctl stop -D ../data/ -m fast
Insert 100,000 rows in a single thread and stop the master node partway through. After restarting, check whether the master's data is consistent with the slaves' (a sketch of the load appears after this list):
1. At the moment the master is stopped, both slave databases hold 5,719 rows.
2. Restart the master database and check its row count:
No data was lost. To double-check correctness, we repeated the test with 10 threads each inserting 100,000 rows concurrently, and again no data was lost.
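A minimal sketch of the single-threaded load, again using the hypothetical test_tb(id int) table and the assumed ports:
for i in $(seq 1 100000); do
  ./psql -p 5432 -U postgres -c "INSERT INTO test_tb VALUES ($i);" >/dev/null
done
./psql -p 5432 -U postgres -c "SELECT count(*) FROM test_tb;"     # master's count after restart
./psql -p 5433 -U postgres -c "SELECT count(*) FROM test_tb;"     # should match on each slave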
Method ②: kill -9 19971 19972 19973 19974 19975 19976
Check the row count on the slave databases:
Start the master database and check its row count:
The master's and slaves' row counts differ. You might say the data has been lost, but don't jump to conclusions; watch whether the slaves' row count changes:
We can conclude that when the master node dies unexpectedly, WAL records that have not yet been shipped to the slave hosts leave the slaves temporarily behind; once the master restarts, it resends the unshipped WAL and the slaves synchronize again.
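One way to watch this catch-up is the pg_stat_replication view on the master (the *_location column names below are from the 9.x series; releases 10 and later rename them to *_lsn):
./psql -p 5432 -U postgres -c "SELECT client_addr, state, sent_location, replay_location FROM pg_stat_replication;"
While a slave is catching up, replay_location trails sent_location; the two converge once all the WAL has been replayed.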
These are the failure scenarios I have simulated for the cluster; they are all the cases I can think of at the moment. If you have any suggestions, leave a message.