Introduction
Personally, I see Gluster as having 5 basic components, namely: mgmt, RPC server, RPC client, NFS server, and NFS client.
These 5 components make up the basic services of Gluster, so this chapter is a brief overview of what mgmt does.
mgmt, nfs, and rpc-server all use the same main function (glusterfsd.c:main), so all 3 share the same loading process; the approximate flow is as follows:
int main (int argc, char *argv[])
{
        ......
        ctx = glusterfs_ctx_new ();
        ret = glusterfs_globals_init ();
        ret = glusterfs_ctx_defaults_init (ctx);
        ret = parse_cmdline (argc, argv, ctx);
        ret = logging_init (ctx);
        gf_proc_dump_init ();
        ret = create_fuse_mount (ctx);
        ret = daemonize (ctx);                  /* become a daemon process */
        ret = glusterfs_volumes_init (ctx);     /* very important work happens in this function */
        ret = event_dispatch (ctx->event_pool); /* distribute events, see 3.3 */
}
Given how frequently ctx appears, one can already guess that ctx is a very important structure holding a great deal of information; what exactly it holds will be expanded on in later chapters. In this main function, all of the earlier calls pave the way for 2 important functions:
1. glusterfs_volumes_init ()
2. event_dispatch ()
In glusterfs_volumes_init (), the different xlators are configured (the volfile-id determines which xlator to configure). An xlator is a modular component that exists on the storage side in the form of a .so file, so loading a different xlator is equivalent to loading a different glusterfsd service.
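Since each xlator exists as a shared object, the loading step can be sketched with plain dlopen/dlsym. This is only an illustration of the mechanism, not Gluster's actual loader code; here libm.so.6 and its cos symbol stand in for an xlator .so and its entry point, and call_from_so is a hypothetical helper name:

```c
#include <dlfcn.h>

/* Minimal sketch of loading a symbol from a shared object at runtime,
 * the same basic mechanism by which a .so xlator is brought in.
 * libm.so.6 / "cos" stand in for a real xlator file and entry point. */
static double call_from_so (const char *so_name, const char *sym, double arg)
{
        void *handle = dlopen (so_name, RTLD_NOW);
        if (!handle)
                return -1.0;    /* library not found */

        double (*fn) (double) = (double (*) (double)) dlsym (handle, sym);
        double ret = fn ? fn (arg) : -1.0;

        dlclose (handle);
        return ret;
}
```

The real loader additionally resolves a fixed set of well-known symbols (init, fini, fops, and so on) from each xlator and wires them into a table, but the dlopen/dlsym pair above is the core of it.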
Three different modes
Gluster has 3 modes of operation, namely:
GF_SERVER_PROCESS, GF_GLUSTERD_PROCESS, GF_CLIENT_PROCESS
Although the 3 commands point to the same file (see Figure 1-1), the file does run in 3 modes:
static uint8_t
gf_get_process_mode (char *exec_name)
{
        char *dup_execname = NULL, *base = NULL;
        uint8_t ret = 0;

        dup_execname = gf_strdup (exec_name);
        base = basename (dup_execname);

        if (!strncmp (base, "glusterfsd", 10)) {
                ret = GF_SERVER_PROCESS;        /* volume, nfs, rpc server */
        } else if (!strncmp (base, "glusterd", 8)) {
                ret = GF_GLUSTERD_PROCESS;      /* daemon */
        } else {
                ret = GF_CLIENT_PROCESS;        /* rpc client */
        }

        GF_FREE (dup_execname);

        return ret;
}
Parse command-line arguments
The command line is parsed using the argp package; for details see
http://www.gnu.org/software/libc/manual/html_node/Argp-Examples.html#Argp-Examples
#0  parse_opts (key=16777219, arg=0x0, state=0x7fffffffce70) at glusterfsd.c:780
#1  0x00007ffff62cfc9b in argp_parse () from /lib64/libc.so.6   -- a function from a shared library
#2  0x0000000000409001 in parse_cmdline (argc=6, argv=0x7fffffffe578, ctx=0x617010) at glusterfsd.c:1793
#3  0x000000000040a1df in main (argc=6, argv=0x7fffffffe578) at glusterfsd.c:2296
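The shape of that parse_cmdline/parse_opts pair can be sketched with a minimal argp program. The option names and the struct below are hypothetical, chosen only to show the callback style argp uses; they are not Gluster's real options:

```c
#include <argp.h>
#include <string.h>

/* Minimal argp sketch in the spirit of parse_cmdline/parse_opts.
 * Options here are illustrative, not glusterfsd's real ones. */
struct opts {
        const char *volfile;
        int         debug;
};

static struct argp_option options[] = {
        { "volfile", 'f', "FILE", 0, "volume file to load", 0 },
        { "debug",   'd', 0,      0, "run in debug mode",   0 },
        { 0 }
};

/* called once per option key, like parse_opts in the backtrace above */
static error_t parse_opt (int key, char *arg, struct argp_state *state)
{
        struct opts *o = state->input;

        switch (key) {
        case 'f': o->volfile = arg; break;
        case 'd': o->debug = 1;     break;
        default:  return ARGP_ERR_UNKNOWN;
        }
        return 0;
}

static struct argp argp = { options, parse_opt, 0,
                            "sketch of argp-based CLI parsing", 0, 0, 0 };

static int parse_cmdline_sketch (int argc, char **argv, struct opts *o)
{
        return argp_parse (&argp, argc, argv, 0, 0, o);
}
```

argp_parse walks the argument vector and invokes parse_opt for every option it recognizes, which is exactly why the debugger stops inside parse_opts with argp_parse as its caller.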
Event_dispatch
The event pool was initialized in glusterfs_ctx_defaults_init.
A lot of function pointers are used in Gluster, so knowing how to find the function a pointer actually refers to is very helpful when reading the code. The following 2 structs determine which event functions are used:
// event.h
struct event_ops {
        struct event_pool * (*new) (int count, int eventthreadcount);

        int (*event_register) (struct event_pool *event_pool, int fd,
                               event_handler_t handler,
                               void *data, int poll_in, int poll_out);

        int (*event_select_on) (struct event_pool *event_pool, int fd, int idx,
                                int poll_in, int poll_out);

        int (*event_unregister) (struct event_pool *event_pool, int fd, int idx);

        int (*event_unregister_close) (struct event_pool *event_pool, int fd,
                                       int idx);

        int (*event_dispatch) (struct event_pool *event_pool);

        int (*event_reconfigure_threads) (struct event_pool *event_pool,
                                          int newcount);

        int (*event_pool_destroy) (struct event_pool *event_pool);
};
// event-epoll.c
struct event_ops event_ops_epoll = {
        .new                       = event_pool_new_epoll,
        .event_register            = event_register_epoll,
        .event_select_on           = event_select_on_epoll,
        .event_unregister          = event_unregister_epoll,
        .event_unregister_close    = event_unregister_close_epoll,
        .event_dispatch            = event_dispatch_epoll,
        .event_reconfigure_threads = event_reconfigure_threads_epoll,
        .event_pool_destroy        = event_pool_destroy_epoll
};
Regardless of the mode, epoll is used and events are distributed in event_dispatch.
event_pool->ops->event_dispatch here is an invocation through a function pointer; following struct event_ops, its real target is event_dispatch_epoll, which uses separate worker threads to handle all events.
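The ops-table indirection can be shown in miniature. The sketch below is a toy model, not Gluster's code: a pool carries a pointer to an ops struct, and calling p->ops->dispatch lands in the epoll implementation, just as event_pool->ops->event_dispatch lands in event_dispatch_epoll. For simplicity it polls once with a zero timeout instead of running worker threads:

```c
#include <sys/epoll.h>

/* Toy model of dispatching through an ops table the way
 * event_pool->ops->event_dispatch resolves to event_dispatch_epoll. */
struct pool;

struct ops {
        int (*dispatch) (struct pool *p);
};

struct pool {
        struct ops *ops;
        int         epfd;
};

static int dispatch_epoll (struct pool *p)
{
        struct epoll_event ev[8];

        /* timeout 0: poll once and return the number of ready fds,
         * instead of the real implementation's worker-thread loop */
        return epoll_wait (p->epfd, ev, 8, 0);
}

static struct ops epoll_ops = { .dispatch = dispatch_epoll };

/* stands in for event_pool_new_epoll: wire the ops table to the pool
 * (a single static pool is enough for this sketch) */
static struct pool *pool_new_epoll (void)
{
        static struct pool p;

        p.ops  = &epoll_ops;
        p.epfd = epoll_create1 (0);
        return &p;
}
```

Once the ops pointer is wired at pool creation, every later call site is mode-agnostic; this is why finding the struct initializer (event_ops_epoll above) is the key step when tracing such a pointer in the real source.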
Loading of the xlator
By default, glusterd uses /etc/glusterfs/glusterd.vol to initialize the xlator:

volume management
    type mgmt/glusterd
    option rpc-auth.auth-glusterfs on
    option rpc-auth.auth-unix on
    option rpc-auth.auth-null on
    option rpc-auth-allow-insecure on
    option transport.socket.listen-backlog 128
    option event-threads 1
    option ping-timeout 0
    option transport.socket.read-fail-log off
    option transport.socket.keepalive-interval 2
    option transport.socket.keepalive-time 10
    option transport-type rdma
    option working-directory /var/lib/glusterd
end-volume
Here the xl pointer points to the current xlator (mgmt); see glusterd.c:init ().
The RPC service is also initialized in glusterd.c:init ().
A stack structure composed of different xlators is called a graph.
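The graph idea can be modeled as a simple linked stack. This is a deliberately simplified toy, with illustrative names and a single child pointer (the real xlator_t links multiple children through an xlator_list_t):

```c
#include <string.h>

/* Toy model of a graph: xlators stacked top to bottom via child links.
 * Fields are simplified from the real xlator_t. */
struct xlator {
        const char    *name;
        const char    *type;
        struct xlator *child;   /* next xlator down the stack, NULL at the bottom */
};

/* walk from the top of the graph to the bottom, counting xlators */
static int graph_depth (struct xlator *top)
{
        int depth = 0;

        for (struct xlator *xl = top; xl; xl = xl->child)
                depth++;
        return depth;
}
```

In the real source, requests travel down such a stack (each xlator transforming or forwarding the operation) and responses travel back up, which is what makes the graph the central runtime structure.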