First, how it works
An XMPP domain is served by one or more ejabberd nodes. These nodes may run on different machines that are connected over the network. They must all be able to connect to port 4369 on all the other nodes, and they must share the same magic cookie (see the Erlang/OTP documentation; in other words, the file ~ejabberd/.erlang.cookie must be the same on all nodes). This is necessary because all nodes exchange information about connected users, s2s connections, registered services, and so on. Each ejabberd node runs the following four modules:
1. Router
This module is the main router of XMPP packets on each node. It routes them based on their destination domain, using a global routing table: the destination domain of the packet is looked up in this table, and if it is found, the packet is routed to the appropriate process; if not, it is sent to the s2s manager.
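As a minimal sketch of this lookup logic (an illustration, not the actual ejabberd_router code; the 'route' table layout as plain {route, Domain, Pid} tuples is an assumption):

-module(router_sketch).
-export([route/4]).

%% Look up the destination domain in the global route table; deliver
%% to the registered process if a route exists, otherwise hand the
%% packet over to the s2s manager.
route(From, To, Packet, DstDomain) ->
    case mnesia:dirty_read(route, DstDomain) of
        [{route, DstDomain, Pid}] ->
            Pid ! {route, From, To, Packet};
        [] ->
            ejabberd_s2s:route(From, To, Packet)
    end.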
2. Local Router
This module routes the packets whose destination domain is one of the server's host names. If the destination JID has a non-empty user part, the packet is routed to the session manager; otherwise its processing depends on its content.
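A minimal sketch of that branch, assuming a JID is represented as a {User, Server, Resource} tuple and leaving the server-side handling as a stub:

-module(local_sketch).
-export([route/3]).

%% A non-empty user part means the stanza is for a user session;
%% anything else is handled by the server itself.
route(From, {User, _Server, _Resource} = To, Packet) when User =/= "" ->
    ejabberd_sm:route(From, To, Packet);
route(From, To, Packet) ->
    handle_local_stanza(From, To, Packet).

%% Stub standing in for the server-side processing of the stanza.
handle_local_stanza(_From, _To, _Packet) -> ok.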
3. Session Manager
This module routes packets to local users. It looks up, in a presence information table, which user resource the packet must be sent to. The packet is then either routed to the appropriate c2s process, stored in offline storage, or bounced back.
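A hypothetical sketch of that decision, assuming the presence table yields a list of {Priority, Pid} pairs (this is not the real ejabberd_sm code; the offline fallback is a stub):

-module(sm_sketch).
-export([route_to_user/4]).

%% Deliver to the resource with the highest non-negative presence
%% priority; with no such resource, fall back to offline storage
%% (or an error bounce).
route_to_user(From, To, Packet, Sessions) ->
    case [S || {Prio, _Pid} = S <- Sessions, Prio >= 0] of
        [] ->
            store_offline(From, To, Packet);
        Candidates ->
            {_Prio, Pid} = lists:max(Candidates),
            Pid ! {route, From, To, Packet}
    end.

%% Stub standing in for offline storage or bouncing an error back.
store_offline(_From, _To, _Packet) -> ok.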
4. S2S Manager
This module routes packets to other XMPP servers. First, it checks whether an opened s2s connection from the packet's source domain to its destination domain already exists. If it does, the s2s manager routes the packet to the process serving this connection; otherwise it opens a new connection.
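A minimal sketch of that check, assuming an ETS table 's2s' that maps {SourceDomain, DestinationDomain} pairs to connection processes (the open_connection helper is hypothetical):

-module(s2s_sketch).
-export([route/4]).

%% Reuse the connection serving this domain pair when one exists;
%% otherwise open a new one and remember it.
route(FromTo, From, To, Packet) ->
    Pid = case ets:lookup(s2s, FromTo) of
              [{FromTo, P}] -> P;
              [] ->
                  P = open_connection(FromTo),
                  ets:insert(s2s, {FromTo, P}),
                  P
          end,
    Pid ! {route, From, To, Packet}.

%% Stub standing in for starting an outgoing s2s connection process.
open_connection(_FromTo) -> spawn(fun() -> ok end).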
Second, cluster configuration
Suppose you already have ejabberd configured and running on a machine named first, and you want to set up another machine, second, to form an ejabberd cluster. Then follow the steps below:
(1) Copy the ~ejabberd/.erlang.cookie file from the first machine to the second machine.
(or) You can also add the '-setcookie content_of_.erlang.cookie' option to all of the following 'erl' commands.
(2) On the second machine, in the working directory of ejabberd, run the following command as the user the ejabberd daemon runs as:
erl -sname ejabberd \
    -mnesia dir '"/var/lib/ejabberd/"' \
    -mnesia extra_db_nodes "['ejabberd@first']" \
    -s mnesia
This will start Mnesia serving the same database as ejabberd@first. You can check this by running the command 'mnesia:info().'. You should see a lot of remote tables and a line similar to the following (note: the Mnesia directory may be different on your system; to find out where it is, run 'ejabberdctl' without parameters, which prints some help including the Mnesia database spool directory):
running db nodes = [ejabberd@first, ejabberd@second]
(3) Now run the following command in the same 'erl' session:
mnesia:change_table_copy_type(schema, node(), disc_copies).
This will create local disc storage for the database. (or) On the second node, change the storage type of the 'schema' table to 'RAM and disc copy' through the web administration interface.
(4) Now you can add more tables to this node using 'mnesia:add_table_copy' or 'mnesia:change_table_copy_type' as above (just replace 'schema' with the other table names, and 'disc_copies' can be replaced with 'ram_copies' or 'disc_only_copies'); concrete examples follow after this step's notes.
Which tables to replicate depends on your needs; you can get some hints from the 'mnesia:info().' command by looking at the size of each table on 'first' and its default storage type.
Replicating a table makes lookups in it faster on this node. On the other hand, writes will be slower. And of course, if one of the machines fails, the remaining replicas will be used.
It is also helpful to look at section 5.3 (Table Fragmentation) of the Mnesia User's Guide. (or) Same as in the previous item, but for other tables.
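For illustration, two hedged examples of such calls; 'roster' and 'offline_msg' are typical ejabberd table names, but check 'mnesia:info().' for the tables that actually exist in your installation:

mnesia:add_table_copy(roster, node(), disc_copies).
mnesia:change_table_copy_type(offline_msg, node(), disc_only_copies).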
(5) Run 'init:stop().' or just 'q().' to exit the Erlang shell. This may take some time if Mnesia has not yet transferred and processed all the data from first.
(6) Now run ejabberd on the second machine with a configuration similar to the one on first: you probably do not need to duplicate the 'acl' and 'access' options, as they will be taken from first; and mod_irc should be enabled on only one machine in the cluster.
You can repeat these steps on other machines to serve this domain.
Third, service load balancing
1. Domain Load Balancing Mechanism
ejabberd includes a mechanism to load balance the components that are plugged into an ejabberd cluster. This means that you can plug one or several instances of the same component into the cluster, and the traffic will be automatically distributed among them.
The default distribution mechanism tries to deliver the packet to a local instance of the component. If more than one local instance is available, one instance is chosen at random. If no local instance is available, a remote component instance is chosen at random.
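A hypothetical sketch of that default choice, with plain pids standing in for component instances (not the actual ejabberd code):

-module(balance_sketch).
-export([pick_instance/1]).

%% Prefer instances running on the local node; fall back to the
%% whole list, and pick one at random from the chosen pool.
pick_instance(Instances) ->
    Local = [P || P <- Instances, node(P) =:= node()],
    Pool = case Local of
               [] -> Instances;
               _  -> Local
           end,
    lists:nth(rand:uniform(length(Pool)), Pool).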
If you need a different behavior, you can change it with the domain_balancing option. Its syntax is as follows:
{domain_balancing, "component.example.com", BalancingCriteria}.
Several balancing criteria are available:
(1) destination: the full JID of the packet's 'to' attribute is used.
(2) source: the full JID of the packet's 'from' attribute is used.
(3) bare_destination: the bare JID (without resource) of the packet's 'to' attribute is used.
(4) bare_source: the bare JID (without resource) of the packet's 'from' attribute is used.
When the value of the corresponding criterion is the same, the same component instance in the cluster will be used.
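For example, to keep all packets coming from the same bare JID on the same component instance, one could write (reusing the example domain from above):

{domain_balancing, "component.example.com", bare_source}.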
2. Load Balancing Buckets
When there is a risk of failure for a given component, domain balancing can cause service trouble: if one component fails, the service will not work correctly unless the sessions are rebalanced.
In this case, it is best to limit the problem to the sessions handled by the failed component. This is what the domain_balancing_component_number option does: it makes the load balancing mechanism not dynamic, but sticky on a fixed number of component instances.
The syntax is:
{domain_balancing_component_number, "component.example.com", Number}.
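For example, to always balance the traffic across exactly five instances of the component (the value 5 is purely illustrative):

{domain_balancing_component_number, "component.example.com", 5}.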