Reprinted from: http://www.iteye.com/topic/643187
Scalability: service nodes can be added or removed at runtime, according to system load, to change the system's processing capacity.
Mnesia is a distributed database that can span multiple nodes, and table location is transparent to the application. This property makes it easy to build highly scalable systems.
RabbitMQ is distributed message-oriented middleware; a multi-node RabbitMQ cluster is built on exactly this Mnesia cluster mechanism.
RabbitMQ initializes Mnesia on a node as follows:
1. Start Mnesia and try to connect to the other nodes of the system;
2. If no node can be connected, this node is the first node to start in the system; in that case it checks that the old tables of the database are intact and waits for connections from other nodes;
3. If some nodes are connected, the schema tables are synchronized and merged with those nodes; after copies of the schema table and the other data tables have been created on the current node, its data is synchronized and consistent with the data on the connected nodes.
Erlang code
% rabbit_mnesia.erl
init_db(ClusterNodes) ->
    case mnesia:change_config(extra_db_nodes, ClusterNodes -- [node()]) of
        ...
    end.
This starts Mnesia and connects to the other nodes.
Erlang code
% rabbit_mnesia.erl
case mnesia:change_config(extra_db_nodes, ClusterNodes -- [node()]) of
    {ok, []} ->
If no node can be connected, the node checks whether the old tables in the current database directory are intact (a new schema is created if the database directory is empty).
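A hedged sketch of what this branch may look like in rabbit_mnesia.erl (check_schema_integrity/0 and create_schema/0 are internal helper names used here for illustration):
Erlang code
{ok, []} ->
    case mnesia:system_info(use_dir) of
        % the database directory already contains tables: verify them
        true  -> ok = check_schema_integrity();
        % the directory is empty: first start ever, create a fresh schema
        false -> ok = create_schema()
    end;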
Erlang code
% rabbit_mnesia.erl
case mnesia:change_config(extra_db_nodes, ClusterNodes -- [node()]) of
    {ok, [_|_]} ->
        IsDiskNode = ClusterNodes == [] orelse    % ClusterNodes == [] means this is the master (first) node
                     lists:member(node(), ClusterNodes),
        ok = wait_for_replicated_tables(),
        ok = create_local_table_copy(schema, disc_copies),
        ok = create_local_table_copies(case IsDiskNode of
                                           true  -> disc;
                                           false -> ram
                                       end);
After successfully connecting to some nodes, Mnesia exchanges database metadata, and the current node waits for the tables that are replicated in the cluster to finish synchronizing:
ok = wait_for_replicated_tables(),
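wait_for_replicated_tables/0 essentially boils down to mnesia:wait_for_tables/2; a minimal sketch, assuming an illustrative table list and timeout rather than RabbitMQ's actual values:
Erlang code
% block until the listed tables have been loaded locally, or time out
ok = mnesia:wait_for_tables([schema, user], 30000).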
If the current node is a disk node, disc copies of the relevant tables must also be created on it:
ok = create_local_table_copies(case IsDiskNode of true -> disc; false -> ram end);
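A minimal sketch of the idea behind create_local_table_copies/1 (not RabbitMQ's actual code; the table list is illustrative): each table gets a local copy, on disk for disk nodes and in RAM otherwise.
Erlang code
create_local_table_copies(Type) ->
    StorageType = case Type of
                      disc -> disc_copies;
                      ram  -> ram_copies
                  end,
    % add a local copy of every replicated table
    [{atomic, ok} = mnesia:add_table_copy(Tab, node(), StorageType)
        || Tab <- [user]],  % illustrative table list
    ok.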
I ran some simple experiments to observe how Mnesia nodes behave when they connect to one another.
Erlang code
(a@localhost)1> mnesia:create_schema([node()]).
ok
(a@localhost)2> mnesia:start().
ok
(a@localhost)3> mnesia:create_table(user, [{disc_copies, [node()]}]).
{atomic,ok}
(a@localhost)4> mnesia:info().
---> Processes holding locks <---
---> Processes waiting for locks <---
---> Participant transactions <---
---> Coordinator transactions <---
---> Uncertain transactions <---
---> Active tables <---
user   : with 0 records occupying 304 words of mem
schema : with 2 records occupying 524 words of mem
==> System info in version "4.4.10", debug level = none <==
opt_disc. Directory "/home/hwh/a" is used.
use fallback at restart = false
running db nodes   = [a@localhost]
stopped db nodes   = []
master node tables = []
remote             = []
ram_copies         = []
disc_copies        = [schema, user]
disc_only_copies   = []
[{a@localhost, disc_copies}] = [schema, user]
3 transactions committed, 0 aborted, 0 restarted, 1 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []
ok
Node a now has two disk tables: schema and user.
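The same facts can be checked programmatically with mnesia:table_info/2 (a minimal sketch, not part of the original session):
Erlang code
% which nodes hold a disc copy of each table, evaluated on node a
[a@localhost] = mnesia:table_info(user, disc_copies),
[a@localhost] = mnesia:table_info(schema, disc_copies).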
Erlang code
(b@localhost)1> mnesia:start().
ok
(b@localhost)2> mnesia:change_config(extra_db_nodes, ['a@localhost', 'b@localhost', 'c@localhost'] -- [node()]).
{ok,[a@localhost]}
Node b tries to connect to nodes a and c, and ends up connected only to the node that is actually running, a. What is node b's state after connecting?
Erlang code
(b@localhost)3> mnesia:info().
---> Processes holding locks <---
---> Processes waiting for locks <---
---> Participant transactions <---
---> Coordinator transactions <---
---> Uncertain transactions <---
---> Active tables <---
schema : with 2 records occupying 533 words of mem
==> System info in version "4.4.10", debug level = none <==
opt_disc. Directory "/home/hwh/b" is not used.
use fallback at restart = false
running db nodes   = [a@localhost, b@localhost]
stopped db nodes   = []
master node tables = []
remote             = [user]
ram_copies         = [schema]
disc_copies        = []
disc_only_copies   = []
[{a@localhost, disc_copies}] = [user]
[{a@localhost, disc_copies}, {b@localhost, ram_copies}] = [schema]
4 transactions committed, 0 aborted, 0 restarted, 0 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []
ok
We can see that the user table is remote: there is a disc copy on node a and no copy of any kind on node b. The schema tables have been merged and are now held on both a and b.
At this point the user table can be operated on from node b, and the table's location is transparent.
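For example (a minimal sketch, not part of the original session; the table was created with Mnesia's default attributes, so records are {user, Key, Val} 3-tuples):
Erlang code
% evaluated on node b, which holds no copy of user: the dirty
% operations are routed to the disc copy on node a transparently
ok = mnesia:dirty_write({user, some_key, some_value}),
[{user, some_key, some_value}] = mnesia:dirty_read(user, some_key).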
Erlang code
(b@localhost)4> mnesia:change_table_copy_type(schema, node(), disc_copies).
{atomic,ok}
(b@localhost)5> mnesia:add_table_copy(user, node(), disc_copies).
{atomic,ok}
(b@localhost)6> mnesia:info().
---> Processes holding locks <---
---> Processes waiting for locks <---
---> Participant transactions <---
---> Coordinator transactions <---
---> Uncertain transactions <---
---> Active tables <---
user   : with 0 records occupying 304 words of mem
schema : with 2 records occupying 542 words of mem
==> System info in version "4.4.10", debug level = none <==
opt_disc. Directory "/home/hwh/b" is used.
use fallback at restart = false
running db nodes   = [a@localhost, b@localhost]
stopped db nodes   = []
master node tables = []
remote             = []
ram_copies         = []
disc_copies        = [schema, user]
disc_only_copies   = []
[{a@localhost, disc_copies}, {b@localhost, disc_copies}] = [schema, user]
6 transactions committed, 0 aborted, 0 restarted, 2 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []
ok
After disc copies of the schema and user tables are created on node b, the user table is no longer remote and can be read directly from the local node.
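This can be verified with the where_to_read table property (a minimal sketch, not part of the original session):
Erlang code
% on node b: reads of the user table are now served by the local copy
true = (mnesia:table_info(user, where_to_read) =:= node()).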
Now stop node b first, then stop node a, and finally restart only node b.
Erlang code
(b@localhost)1> mnesia:start().
ok
(b@localhost)2> mnesia:info().
---> Processes holding locks <---
---> Processes waiting for locks <---
---> Participant transactions <---
---> Coordinator transactions <---
---> Uncertain transactions <---
---> Active tables <---
schema : with 2 records occupying 542 words of mem
==> System info in version "4.4.10", debug level = none <==
opt_disc. Directory "/home/hwh/b" is used.
use fallback at restart = false
running db nodes   = [b@localhost]
stopped db nodes   = [a@localhost]
master node tables = []
remote             = []
ram_copies         = []
disc_copies        = [schema, user]
disc_only_copies   = []
[] = [user]
[{b@localhost, disc_copies}] = [schema]
2 transactions committed, 0 aborted, 0 restarted, 0 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []
ok
(b@localhost)3> mnesia:dirty_read(user, key).
** exception exit: {aborted,{no_exists,[user,key]}}
     in function  mnesia:abort/1
The user table is unavailable at this point. Nodes a and b jointly maintain the consistency of the user table, but node b stopped first, so the final state of the table is held by node a, and b will not load the table on its own.
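If node a were permanently lost, the table could still be force-loaded from b's possibly stale local copy; this is not what the original session does, just a sketch of the escape hatch Mnesia provides:
Erlang code
% accept node b's local replica as authoritative, at the risk of
% losing any updates that only reached node a
yes = mnesia:force_load_table(user).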
Once node a is started again, the user table becomes available.
Erlang code
(a@localhost)1> mnesia:start().
Erlang code
(b@localhost)4> mnesia:info().
---> Processes holding locks <---
---> Processes waiting for locks <---
---> Participant transactions <---
---> Coordinator transactions <---
---> Uncertain transactions <---
---> Active tables <---
user   : with 0 records occupying 304 words of mem
schema : with 2 records occupying 542 words of mem
==> System info in version "4.4.10", debug level = none <==
opt_disc. Directory "/home/hwh/b" is used.
use fallback at restart = false
running db nodes   = [a@localhost, b@localhost]
stopped db nodes   = []
master node tables = []
remote             = []
ram_copies         = []
disc_copies        = [schema, user]
disc_only_copies   = []
[{a@localhost, disc_copies}, {b@localhost, disc_copies}] = [schema, user]
3 transactions committed, 0 aborted, 0 restarted, 0 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []
ok