DICOM: Profiling the Flag Bits & Events of the Web Server Mongoose in Orthanc (III)


Background:

Orthanc is the new DICOM server introduced in this column. It is lightweight and REST-enabled, and it turns any computer running Windows or Linux into a DICOM server, or mini-PACS. Orthanc embeds a variety of modules, keeps database management simple, and does not rely on third-party software. Analyzing the Orthanc source code is therefore a good way to learn the various pieces needed to build a DICOM system, such as the SQLite embedded database, the Google glog logging library, the DCMTK DICOM toolkit, and the recently open-sourced web server Mongoose, which this series introduces.

The previous blog post briefly analyzed the sequence of events triggered by a connection request in Mongoose. The debug output roughly matched the NS_ACCEPT -> NS_RECV -> NS_SEND -> NS_POLL ... -> NS_CLOSE flow given on the official Fossa website, but because the network environment changes in real time while the program runs, the debug log occasionally shows multiple NS_ACCEPT or multiple NS_POLL events, and so on. As noted at the end of that post, to understand the real cause of event triggering we need to analyze the source code of the ns_mgr_poll function, and then, by studying the design of Mongoose and Fossa, gain a more complete understanding of how events are triggered.

Mongoose Events

The official Mongoose documentation describes it as follows:

Mongoose has a single-threaded, event-driven, asynchronous, non-blocking core. This core is Fossa. "Single-threaded" refers to the mg_poll_server loop running in the main thread.

The mg_poll_server function iterates over all valid connections and monitors each connection's socket through an asynchronous select() call, completing one I/O iteration; this is repeated until processing is complete. Each time select() returns, an I/O operation is performed on every socket whose status has changed (that is, that has data to send or receive). However, mg_poll_server does not loop by itself: the caller must invoke mg_poll_server in an external loop to keep monitoring the connection state in real time.

If you look at the code, you will find that mg_poll_server simply calls the ns_mgr_poll function. Let's look at Fossa's description of that function:
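As a rough illustration of this relationship, the wrapper amounts to the following sketch; the name of the embedded ns_mgr member inside struct mg_server is an assumption for illustration, not copied from the Mongoose source, so consult mongoose.c for the exact definition:

#include "mongoose.h"

/* Sketch only: mg_poll_server is essentially a thin wrapper that forwards
 * one polling iteration to Fossa's ns_mgr_poll.  The "ns_mgr" member name
 * below is assumed; see mongoose.c for the real field. */
time_t mg_poll_server(struct mg_server *server, int milliseconds) {
  return ns_mgr_poll(&server->ns_mgr, milliseconds);
}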

Fossa is a networking library that supports multiple protocols, implements non-blocking, asynchronous I/O, and provides an event-based API. With Fossa you declare and initialize the event manager and event handlers, create connections, and finally implement event monitoring by calling the ns_mgr_poll function in a loop. ns_mgr_poll iterates over all sockets, accepts new connections, sends and receives data, closes connections, and invokes the corresponding event handler for each specific event.
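To make this workflow concrete, here is a minimal sketch of a bare Fossa server built from the API just described (ns_mgr_init, ns_bind, ns_mgr_poll, ns_mgr_free); the handler body, the port, and the echo behavior are illustrative assumptions rather than code from Orthanc or the official examples:

#include "fossa.h"

/* Sketch of a user-defined event handler: Fossa calls it with the
 * connection, the event code, and event-specific data. */
static void ev_handler(struct ns_connection *nc, int ev, void *ev_data) {
  struct iobuf *io = &nc->recv_iobuf;
  (void) ev_data;

  switch (ev) {
    case NS_ACCEPT:
      /* A new inbound connection was accepted on the listening socket. */
      break;
    case NS_RECV:
      ns_send(nc, io->buf, (int) io->len);  /* echo the received bytes back */
      iobuf_remove(io, io->len);            /* and discard them from the buffer */
      break;
    default:
      break;
  }
}

int main(void) {
  struct ns_mgr mgr;

  ns_mgr_init(&mgr, NULL);            /* initialize the event manager  */
  ns_bind(&mgr, "8080", ev_handler);  /* listening connection on :8080 */

  for (;;) {
    ns_mgr_poll(&mgr, 1000);          /* one polling iteration per call */
  }

  ns_mgr_free(&mgr);
  return 0;
}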

Fossa requires each connection to be bound to an event handler function, which is implemented by the user. Event handling is the core element of a Fossa application: it is where the behavior of the program is defined. Mongoose is a wrapper around Fossa that supplies default handlers for the various events, so you can simply copy and paste the sample code from the official Mongoose documentation to bring up a simple web server. The code is as follows:

#include "mongoose.h"int main(void) {  struct mg_server *server = mg_create_server(NULL, NULL);  mg_set_option(server, "document_root", ".");  // Serve current directory  mg_set_option(server, "listening_port", "8080");  // Open port 8080  for (;;) {    mg_poll_server(server, 1000);   // Infinite loop, Ctrl-C to stop  }  mg_destroy_server(&server);  return 0;}

The code above does not provide the user-defined event handler described on the official Fossa site, yet it successfully brings up a web server (for details, see the blog post "DICOM: Anatomy of Orthanc's Web Server, Mongoose"). This shows that when Mongoose wraps Fossa it supplies a default event handler, namely mg_ev_handler, which specifies how each Fossa event is processed. The code is too long to post here; for details refer to the source of mg_ev_handler. So when using Mongoose, the main concern is the custom events. In addition, Mongoose re-wraps the Fossa events in a thin layer of its own, marking them with an MG_ prefix, such as MG_AUTH, MG_REQUEST, MG_CONNECT, and MG_REPLY.
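For reference, here is a minimal sketch of what such a custom Mongoose handler can look like, using the MG_AUTH and MG_REQUEST events mentioned above; the handler body and reply text are assumptions for illustration, built on the Mongoose 5.x API in which mg_create_server takes a handler and mg_printf_data writes the response:

#include "mongoose.h"

// Sketch of a custom Mongoose event handler.  Returning MG_TRUE tells
// Mongoose the event has been handled; MG_FALSE falls back to the
// default processing in mg_ev_handler.
static int ev_handler(struct mg_connection *conn, enum mg_event ev) {
  switch (ev) {
    case MG_AUTH:
      return MG_TRUE;                       // no authentication required
    case MG_REQUEST:
      mg_printf_data(conn, "Hello from a custom handler: %s", conn->uri);
      return MG_TRUE;                       // request fully handled
    default:
      return MG_FALSE;                      // let Mongoose handle the rest
  }
}

int main(void) {
  struct mg_server *server = mg_create_server(NULL, ev_handler);
  mg_set_option(server, "listening_port", "8080");
  for (;;) {
    mg_poll_server(server, 1000);
  }
  mg_destroy_server(&server);
  return 0;
}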

Fossa Flags

As explained above, the point of Mongoose is to wrap Fossa's events a second time, and its main contribution is the function that implements the default processing flow for Fossa events, namely the mg_ev_handler mentioned above. So what actually triggers the events asked about in the last blog post? Analyzing the mg_ev_handler source code only shows what Mongoose does with Fossa's events; it does not explain why they fire. To resolve the question we need to analyze Fossa's processing core, namely ns_mgr_poll. In Fossa, each connection contains a corresponding flags field, which can be read as a flag bit, status bit, or feature bit. The flags are meant to differentiate connections (where "connection" is a noun standing for everything Fossa handles on behalf of a request). Fossa divides connections into three categories, namely inbound, outbound, and listening; over the life cycle of a connection, each stage is represented by a flag. ns_mgr_poll handles the various connections according to their flags, for example adding new connections, starting to read data, starting to send data, and closing connections. Because the flags separate the stages of a connection, the processing of each stage can be modeled, and that is what the events are.

So to figure out the event-triggering process from the previous blog post, it is essential to understand how Fossa and Mongoose use flags to represent each phase of a connection. The official Fossa documentation states that each connection has a flags field. Fossa defines a variety of flags for different protocols; some are set by Fossa itself, and some need to be set by the external, user-defined event handler (this is how the user interacts with Fossa). The main flags are listed below, with a short sketch of how a handler sets them after the first list:

    • NSF_FINISHED_SENDING_DATA
    • NSF_BUFFER_BUT_DONT_SEND
    • NSF_CLOSE_IMMEDIATELY
    • NSF_USER_1 / NSF_USER_2 / NSF_USER_3 / NSF_USER_4
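For instance, a user handler typically queues its reply and then sets NSF_FINISHED_SENDING_DATA so that ns_mgr_poll closes the connection once the send buffer drains (this matches the cleanup check in the ns_mgr_poll listing later in this post), while NSF_CLOSE_IMMEDIATELY requests an immediate close instead. A hedged sketch, with the handler name and reply text assumed for illustration:

static void ev_handler(struct ns_connection *nc, int ev, void *ev_data) {
  (void) ev_data;
  if (ev == NS_RECV) {
    // Queue a reply into nc->send_iobuf ...
    ns_printf(nc, "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok");
    // ... and tell ns_mgr_poll to close this connection after the
    // queued data has been written out.
    nc->flags |= NSF_FINISHED_SENDING_DATA;
  } else if (ev == NS_POLL) {
    // The other kind of user flag: abort a connection unconditionally.
    // nc->flags |= NSF_CLOSE_IMMEDIATELY;
  }
}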

All of the flags above are meant to be set by an external, user-defined event handler, as the sketch shows. Next, let's look at the flags set internally by Fossa:

    • NSF_SSL_HANDSHAKE_DONE
    • NSF_CONNECTING
    • NSF_LISTENING
    • NSF_WEBSOCKET_NO_DEFRAG
    • NSF_IS_WEBSOCKET
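These flags are individual bits in the connection's flags field, so several can be set at once and ns_mgr_poll tests them with bitwise AND, as the listing below shows. A sketch of how such definitions typically look; the bit positions here are assumptions for illustration, and fossa.h holds the authoritative values:

/* Sketch only: illustrative bit assignments, not copied from fossa.h. */
#define NSF_FINISHED_SENDING_DATA  (1 << 0)   /* set by the user handler */
#define NSF_BUFFER_BUT_DONT_SEND   (1 << 1)   /* set by the user handler */
#define NSF_CONNECTING             (1 << 3)   /* set internally by fossa */
#define NSF_CLOSE_IMMEDIATELY      (1 << 4)   /* set by the user handler */
#define NSF_LISTENING              (1 << 7)   /* set internally by fossa */

/* ns_mgr_poll tests the bits with a mask, for example:              */
/*   if (!(conn->flags & (NSF_LISTENING | NSF_CONNECTING))) { ... }  */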

Judging from the flag names, these flags relate to the connection establishment process or to connection-specific state, so this group of flags concerns the processing flow of the HTTP web server itself. That flow is the core logic inside the Fossa and Mongoose open-source libraries, which is why these flags are set internally by Fossa. An open-source library usually implements the protocol-level part of the flow itself and leaves only the customizable portions to the user's own definition.

Having narrowed down where to look, let's test the example from the official Mongoose documentation:

Example test

For convenience, the code from the official documentation is posted here once again:

#include "mongoose.h"int main(void) {  struct mg_server *server = mg_create_server(NULL, NULL);  mg_set_option(server, "document_root", ".");  // Serve current directory  mg_set_option(server, "listening_port", "8080");  // Open port 8080  for (;;) {    mg_poll_server(server, 1000);   // Infinite loop, Ctrl-C to stop  }  mg_destroy_server(&server);  return 0;}

In addition, in order to keep track of the flags inside Fossa's ns_mgr_poll function, the ns_mgr_poll code in mongoose.c was modified and the corresponding debug output was added. The specific changes are as follows:

time_t ns_mgr_poll(struct ns_mgr *mgr, int milli) {
  int loop = 0;
  struct ns_connection *conn, *tmp_conn;
  struct timeval tv;
  fd_set read_set, write_set;
  sock_t max_fd = INVALID_SOCKET;
  time_t current_time = time(NULL);

  FD_ZERO(&read_set);
  FD_ZERO(&write_set);
  ns_add_to_set(mgr->ctl[1], &read_set, &max_fd);

  /* Phase 1: connection setup -- walk the connection list, fire NS_POLL,
   * and register each socket for the read/write checks it needs. */
  for (conn = mgr->active_connections; conn != NULL; conn = tmp_conn) {
    printf("The for loop in adding sock or conn section was %d times\n",
           loop++);  // just for debugging
    tmp_conn = conn->next;
    if (!(conn->flags & (NSF_LISTENING | NSF_CONNECTING))) {
      printf("For the flag --%d--, for the sock --%d--, call user ev_handler for NS_POLL\n",
             (int) conn->flags,
             (conn->sock != INVALID_SOCKET ? (int) conn->sock : -1));  // just for debugging
      ns_call(conn, NS_POLL, &current_time);
    }
    if (!(conn->flags & NSF_WANT_WRITE)) {
      // DBG(("%p read_set", conn));
      printf("For the flag --%d--, for the sock --%d--, call ns_add_to_set function!\n",
             (int) conn->flags,
             (conn->sock != INVALID_SOCKET ? (int) conn->sock : -1));  // just for debugging
      ns_add_to_set(conn->sock, &read_set, &max_fd);
    }
    if (((conn->flags & NSF_CONNECTING) && !(conn->flags & NSF_WANT_READ)) ||
        (conn->send_iobuf.len > 0 && !(conn->flags & NSF_CONNECTING) &&
         !(conn->flags & NSF_BUFFER_BUT_DONT_SEND))) {
      // DBG(("%p write_set", conn));
      printf("For the flag --%d--2--, for the sock --%d--, call ns_add_to_set function!\n",
             (int) conn->flags,
             (conn->sock != INVALID_SOCKET ? (int) conn->sock : -1));  // just for debugging
      ns_add_to_set(conn->sock, &write_set, &max_fd);
    }
    if (conn->flags & NSF_CLOSE_IMMEDIATELY) {
      printf("For the flag --%d--, for the sock --%d--, call ns_close_conn function!\n",
             (int) conn->flags,
             (conn->sock != INVALID_SOCKET ? (int) conn->sock : -1));  // just for debugging
      ns_close_conn(conn);
    }
  }

  tv.tv_sec = milli / 1000;
  tv.tv_usec = (milli % 1000) * 1000;

  loop = 0;
  /* Phase 2: connection monitoring -- select() reports which sockets are
   * readable/writable; then accept, read, or write accordingly. */
  if (select((int) max_fd + 1, &read_set, &write_set, NULL, &tv) > 0) {
    // select() might have been waiting for a long time, reset current_time
    // now to prevent last_io_time being set to the past.
    current_time = time(NULL);

    // Read wakeup messages
    if (mgr->ctl[1] != INVALID_SOCKET && FD_ISSET(mgr->ctl[1], &read_set)) {
      struct ctl_msg ctl_msg;
      int len = (int) recv(mgr->ctl[1], (char *) &ctl_msg, sizeof(ctl_msg), 0);
      send(mgr->ctl[1], ctl_msg.message, 1, 0);
      if (len >= (int) sizeof(ctl_msg.callback) && ctl_msg.callback != NULL) {
        struct ns_connection *c;
        for (c = ns_next(mgr, NULL); c != NULL; c = ns_next(mgr, c)) {
          ctl_msg.callback(c, NS_POLL, ctl_msg.message);
        }
      }
    }

    for (conn = mgr->active_connections; conn != NULL; conn = tmp_conn) {
      printf("The for loop in select section was %d times\n",
             loop++);  // just for debugging
      tmp_conn = conn->next;
      if (FD_ISSET(conn->sock, &read_set)) {
        if (conn->flags & NSF_LISTENING) {
          printf("For the flag --%d--, for the sock --%d--, NSF_LISTENING!\n",
                 (int) conn->flags,
                 (conn->sock != INVALID_SOCKET ? (int) conn->sock : -1));  // just for debugging
          if (conn->flags & NSF_UDP) {
            printf("For the flag --%d--, for the sock --%d--, call ns_handle_udp function!\n",
                   (int) conn->flags,
                   (conn->sock != INVALID_SOCKET ? (int) conn->sock : -1));  // just for debugging
            ns_handle_udp(conn);
          } else {
            // We're not looping here, and accepting just one connection at
            // a time. The reason is that eCos does not respect non-blocking
            // flag on a listening socket and hangs in a loop.
            printf("For the flag --%d--, for the sock --%d--, call accept_conn function!\n",
                   (int) conn->flags,
                   (conn->sock != INVALID_SOCKET ? (int) conn->sock : -1));  // just for debugging
            accept_conn(conn);
          }
        } else {
          conn->last_io_time = current_time;
          printf("For the flag --%d--, for the sock --%d--, call ns_read_from_socket function!\n",
                 (int) conn->flags,
                 (conn->sock != INVALID_SOCKET ? (int) conn->sock : -1));  // just for debugging
          ns_read_from_socket(conn);
        }
      }
      if (FD_ISSET(conn->sock, &write_set)) {
        if (conn->flags & NSF_CONNECTING) {
          printf("For the flag --%d--, for the sock --%d--, call ns_read_from_socket function!\n",
                 (int) conn->flags,
                 (conn->sock != INVALID_SOCKET ? (int) conn->sock : -1));  // just for debugging
          ns_read_from_socket(conn);
        } else if (!(conn->flags & NSF_BUFFER_BUT_DONT_SEND)) {
          conn->last_io_time = current_time;
          printf("For the flag --%d--, for the sock --%d--, call ns_write_to_socket function!\n",
                 (int) conn->flags,
                 (conn->sock != INVALID_SOCKET ? (int) conn->sock : -1));  // just for debugging
          ns_write_to_socket(conn);
        }
      }
    }
  }

  loop = 0;
  /* Phase 3: connection cleanup -- close connections that asked for an
   * immediate close or that have finished sending their data. */
  for (conn = mgr->active_connections; conn != NULL; conn = tmp_conn) {
    printf("The for loop in cleanup section was %d times\n",
           loop++);  // just for debugging
    tmp_conn = conn->next;
    if ((conn->flags & NSF_CLOSE_IMMEDIATELY) ||
        (conn->send_iobuf.len == 0 &&
         (conn->flags & NSF_FINISHED_SENDING_DATA))) {
      printf("For the flag --%d--2--, for the sock --%d--, call ns_close_conn function!\n",
             (int) conn->flags,
             (conn->sock != INVALID_SOCKET ? (int) conn->sock : -1));  // just for debugging
      ns_close_conn(conn);
    }
  }

  return current_time;
}

"Note": the code in the printf output statement is for the convenience of debugging added, to be tested after you delete yourself, so as not to affect the performance of Mongoose server.

Test results

The internal structure of the ns_mgr_poll function shows that it is divided into three main functional modules:

1. Connection setup phase (add new connections to the server-side connection list, and register a readable or writable check for each connection)

2. Connection monitoring phase (use the select asynchronous model to monitor the read and write status of each connection)

3. Connection cleanup phase (decide whether a connection needs to be closed based on its actual state)

The debug output corresponds one-to-one with the three phases above. To make the results easier to check, different background colors were used for the different kinds of debug information, as shown in the following figure:

This figure shows the state of Mongoose right after the web server is initialized: at that point the mg_server connection list contains only the listening port, that is, the listening connection.

Next, enter http://localhost:8080 in the browser; after it returns, the debug log looks like this:

We can see that the original listening connection detected a new link in select(), that is, an inbound connection. In the connection setup phase, the newly accepted inbound connection is added to the server's connection list, and it is inserted at the head of the list; the for-loop output in the subsequent connection monitoring phase confirms that the insertion position is the list head.
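Head insertion is the usual singly-linked-list idiom. A minimal sketch of what the insertion into mgr->active_connections amounts to; the function name is assumed for illustration, and fossa's real helper may track more state than shown here:

/* Sketch: prepend a freshly accepted connection to the manager's list.
 * This classic head insertion is why the new inbound connection shows
 * up first in the for-loop debug output. */
static void add_conn_sketch(struct ns_mgr *mgr, struct ns_connection *nc) {
  nc->next = mgr->active_connections;   /* new node points at the old head */
  mgr->active_connections = nc;         /* the new node becomes the head   */
}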

For receiving and sending data, Fossa uses an internal buffering mechanism; each connection carries receive and send I/O buffers, structured as follows:
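A sketch of the structures involved, based on the send_iobuf field already visible in the ns_mgr_poll listing above; the recv_iobuf field and the exact member layout are assumptions to be checked against fossa.h:

#include <stddef.h>

/* Sketch of fossa's I/O buffering (see fossa.h for the authoritative
 * definitions).  Each connection owns two growable byte buffers. */
struct iobuf {
  char *buf;     /* pointer to the buffered bytes           */
  size_t len;    /* number of bytes currently in the buffer */
  size_t size;   /* allocated capacity of buf               */
};

struct ns_connection {
  /* ... other fields (socket, flags, event handler, ...) ... */
  struct iobuf recv_iobuf;   /* bytes received but not yet consumed  */
  struct iobuf send_iobuf;   /* bytes queued but not yet written out */
  /* ... */
};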

After the data has been received, sent, and processed, the Mongoose server's connection list briefly holds three connections; over time, the two connections other than the listening connection are closed one by one, and the Mongoose web server returns to its initialized state.

From the debug log above we can see that the three main modules inside ns_mgr_poll all classify and process each connection in the list according to its flags, thereby implementing port monitoring, connection acceptance, data receiving and sending, connection shutdown, and the other functions. This is exactly the part that the previous blog post hoped to study further. This post has only analyzed and debugged the official Mongoose test example; the underlying theory touches on the HTTP protocol and on timing diagrams of the concrete implementation, and those details will be given in subsequent articles, so stay tuned.

This is the first time I have written a CSDN blog post in Markdown; I am not sure how it will turn out. ^_^






Date: 2015-02-10

