This article describes ZeroMQ's REQ and REP sockets.
We start from the classic request-reply example.
The code is very simple:
// Hello World server
// Binds REP socket to tcp://*:5555
// Expects "Hello" from client, replies with "World"
#include <zmq.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>

int main (void)
{
    void *context = zmq_init (1);                       // create a context and initialize one io_thread

    // Socket to talk to clients
    void *responder = zmq_socket (context, ZMQ_REP);    // create a REP-type socket
    zmq_bind (responder, "tcp://*:5555");               // bind it to the port; connections are accepted in the io_thread

    while (1) {
        // Wait for next request from client
        zmq_msg_t request;                              // create the message structure
        zmq_msg_init (&request);                        // initialize an empty message
        zmq_recv (responder, &request, 0);              // receive the message from the pipe
        printf ("Received Hello\n");
        zmq_msg_close (&request);                       // destroy the message

        // Do some 'work'
        sleep (1);

        // Send reply back to client
        zmq_msg_t reply;                                // create the reply message structure
        zmq_msg_init_size (&reply, 5);                  // initialize a five-byte message to hold "World"
        memcpy (zmq_msg_data (&reply), "World", 5);     // copy the reply data into the message
        zmq_send (responder, &reply, 0);                // push the message to the pipe; the io_thread reads it from the pipe and sends it
        zmq_msg_close (&reply);
    }

    // We never get here, but if we did, this would be how we'd end
    zmq_close (responder);
    zmq_term (context);
    return 0;
}
// Hello World client
// Connects REQ socket to tcp://localhost:5555
// Sends "Hello" to server, expects "World" back
#include <zmq.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

int main (void)
{
    void *context = zmq_init (1);                       // create a context and initialize one io_thread

    // Socket to talk to server
    printf ("Connecting to hello world server...\n");
    void *requester = zmq_socket (context, ZMQ_REQ);    // create a REQ-type socket
    zmq_connect (requester, "tcp://localhost:5555");    // connect to the endpoint

    int request_nbr;
    for (request_nbr = 0; request_nbr != 10; request_nbr++) {   // send ten requests
        zmq_msg_t request;                              // create the request message structure
        zmq_msg_init_size (&request, 5);                // initialize a five-byte request message
        memcpy (zmq_msg_data (&request), "Hello", 5);   // set the request content to "Hello"
        printf ("Sending Hello %d...\n", request_nbr);
        zmq_send (requester, &request, 0);              // push the message to the pipe; the io_thread reads it from the pipe and sends it
        zmq_msg_close (&request);                       // destroy the request message

        zmq_msg_t reply;                                // create the reply message structure
        zmq_msg_init (&reply);                          // initialize an empty message
        zmq_recv (requester, &reply, 0);                // receive the reply from the pipe
        printf ("Received World %d\n", request_nbr);
        zmq_msg_close (&reply);                         // destroy the reply message
    }

    zmq_close (requester);
    zmq_term (context);
    return 0;
}
From the analysis of previous blogs, you should understand the following points:
1. When the context is created, the specified number of io_threads is created (one in this example). Each io_thread follows the reactor pattern, using a poller to continuously poll for read/write events.
2. When a socket binds, an io_thread is chosen to call accept(). When a connection is accepted, the peers exchange identities.
3. After the identities are exchanged, the io_thread creates a session and attaches it to two pipes. When the socket calls send() and recv(), it interacts with the corresponding pipe.
4. When the poller reports a readable event, the io_thread reads from the network and writes the data into the pipe; when it reports a writable event, it reads data from the pipe and sends it out.
OK. If you do not understand this, go back to the source code and read the previous article. If you still do not understand it, contact me.
Next, let's look at REQ and REP.
First, we will introduce the message encapsulation format used by REQ and REP:
We learned about multipart messages in ZeroMQ in a previous post. When sending and receiving, each socket type in ZeroMQ encapsulates the message by adding its own header parts in front of it. These header parts carry the ZMQ_MSG_MORE flag, so together with the original message they form one multipart message.
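To make the multipart mechanics concrete, here is a minimal sketch using the public zeromq 2.x API (the same API as the examples above). ZMQ_SNDMORE and ZMQ_RCVMORE are the application-level counterparts of the internal ZMQ_MSG_MORE flag; the PAIR sockets and the inproc endpoint are only scaffolding chosen for this demonstration, not part of the original example.

#include <zmq.h>
#include <string.h>
#include <stdint.h>
#include <stdio.h>

int main (void)
{
    void *context = zmq_init (1);
    void *a = zmq_socket (context, ZMQ_PAIR);
    void *b = zmq_socket (context, ZMQ_PAIR);
    zmq_bind (a, "inproc://multipart-demo");
    zmq_connect (b, "inproc://multipart-demo");

    // Send a two-part message: every part except the last carries ZMQ_SNDMORE.
    zmq_msg_t part1, part2;
    zmq_msg_init_size (&part1, 5);
    memcpy (zmq_msg_data (&part1), "part1", 5);
    zmq_send (b, &part1, ZMQ_SNDMORE);
    zmq_msg_close (&part1);
    zmq_msg_init_size (&part2, 5);
    memcpy (zmq_msg_data (&part2), "part2", 5);
    zmq_send (b, &part2, 0);
    zmq_msg_close (&part2);

    // Receive all parts: ZMQ_RCVMORE (an int64_t option in the 2.x API)
    // tells us whether more parts of the same message follow.
    int64_t more = 1;
    size_t more_size = sizeof (more);
    while (more) {
        zmq_msg_t part;
        zmq_msg_init (&part);
        zmq_recv (a, &part, 0);
        zmq_getsockopt (a, ZMQ_RCVMORE, &more, &more_size);
        printf ("got a part of %d bytes, more = %d\n",
            (int) zmq_msg_size (&part), (int) more);
        zmq_msg_close (&part);
    }

    zmq_close (a);
    zmq_close (b);
    zmq_term (context);
    return 0;
}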
The message encapsulation of REQ and REP is as follows:
1. REQ sends a message:
Original message: [ data ]
Message encapsulated by REQ: [ 0-byte empty part ][ data ]
We can see that REQ prepends a 0-byte empty message part (a delimiter) when it sends.
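As a hedged side note (not part of the original article), the same wire format can be produced by hand with a ZMQ_XREQ socket, which does not add the delimiter itself; the REP server from the example above cannot tell the difference. The endpoint and payload below are the ones from that example.

#include <zmq.h>
#include <string.h>

int main (void)
{
    void *context = zmq_init (1);
    void *requester = zmq_socket (context, ZMQ_XREQ);
    zmq_connect (requester, "tcp://localhost:5555");

    // Frame 1: the 0-byte delimiter that ZMQ_REQ would normally prepend for us.
    zmq_msg_t empty;
    zmq_msg_init (&empty);
    zmq_send (requester, &empty, ZMQ_SNDMORE);
    zmq_msg_close (&empty);

    // Frame 2: the actual request body.
    zmq_msg_t body;
    zmq_msg_init_size (&body, 5);
    memcpy (zmq_msg_data (&body), "Hello", 5);
    zmq_send (requester, &body, 0);
    zmq_msg_close (&body);

    // The reply comes back with the same 0-byte delimiter in front of "World".
    zmq_msg_t reply_empty, reply_body;
    zmq_msg_init (&reply_empty);
    zmq_recv (requester, &reply_empty, 0);
    zmq_msg_close (&reply_empty);
    zmq_msg_init (&reply_body);
    zmq_recv (requester, &reply_body, 0);
    zmq_msg_close (&reply_body);

    zmq_close (requester);
    zmq_term (context);
    return 0;
}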
Let's take a look at the source code:
int zmq::req_t::xsend (zmq_msg_t *msg_, int flags_)
{
    // If we've sent a request and we still haven't got the reply,
    // we can't send another request.
    if (receiving_reply) {
        errno = EFSM;
        return -1;
    }

    // First part of the request is empty message part (stack bottom).
    if (message_begins) {
        // Prepend the empty message part.
        zmq_msg_t prefix;
        int rc = zmq_msg_init (&prefix);
        zmq_assert (rc == 0);
        prefix.flags |= ZMQ_MSG_MORE;
        rc = xreq_t::xsend (&prefix, flags_);
        if (rc != 0)
            return rc;
        message_begins = false;
    }

    bool more = msg_->flags & ZMQ_MSG_MORE;

    int rc = xreq_t::xsend (msg_, flags_);      // send the message part itself
    if (rc != 0)
        return rc;

    // If the request was fully sent, flip the FSM into reply-receiving state.
    if (!more) {
        receiving_reply = true;
        message_begins = true;
    }

    return 0;
}
Leaving aside the details of xreq_t::xsend() (we will cover it in a later post), think of it for now as simply sending a message part. The code is then easy to follow: an empty message part is prepended before the original message, and the two flags receiving_reply and message_begins control the alternation between sending and receiving.
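The effect of this little state machine is visible from application code. A minimal sketch (zeromq 2.x API; the endpoint is the one from the example above, and send_str is just a helper invented for this sketch): a second zmq_send() on a REQ socket, issued before the reply has been received, fails with EFSM.

#include <zmq.h>
#include <string.h>
#include <errno.h>
#include <assert.h>

// Hypothetical helper for this sketch: send a short string as one message part.
static int send_str (void *socket, const char *text)
{
    zmq_msg_t msg;
    zmq_msg_init_size (&msg, strlen (text));
    memcpy (zmq_msg_data (&msg), text, strlen (text));
    int rc = zmq_send (socket, &msg, 0);
    zmq_msg_close (&msg);
    return rc;
}

int main (void)
{
    void *context = zmq_init (1);
    void *requester = zmq_socket (context, ZMQ_REQ);
    zmq_connect (requester, "tcp://localhost:5555");

    assert (send_str (requester, "Hello") == 0);    // first send: accepted
    assert (send_str (requester, "Hello") == -1);   // second send before recv: rejected...
    assert (errno == EFSM);                         // ...with the EFSM error set in req_t::xsend()

    zmq_close (requester);
    zmq_term (context);
    return 0;
}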
2. REP receives the message:
When receiving a request, REP keeps reading message parts until it finds the empty message part, and pushes this routing prefix (up to and including the empty message part) into the corresponding output pipe, where it will serve as the header of the reply message.
The source code is as follows:
int zmq::rep_t::xrecv (zmq_msg_t *msg_, int flags_)
{
    // If we are in middle of sending a reply, we cannot receive next request.
    if (sending_reply) {
        errno = EFSM;
        return -1;
    }

    if (request_begins) {

        // Copy the backtrace stack to the reply pipe.
        bool bottom = false;
        while (!bottom) {       // read message parts until the empty message part is found

            // TODO: What if request can be read but reply pipe is not
            // ready for writing?

            // Get next part of the backtrace stack.
            int rc = xrep_t::xrecv (msg_, flags_);
            if (rc != 0)
                return rc;

            if (msg_->flags & ZMQ_MSG_MORE) {

                // Empty message part delimits the traceback stack.
                bottom = (zmq_msg_size (msg_) == 0);    // check whether it is the empty message part

                // Push it to the reply pipe.
                rc = xrep_t::xsend (msg_, flags_);      // send the part to the reply pipe as the header of the reply message
                zmq_assert (rc == 0);
            }
            else {

                // If the traceback stack is malformed, discard anything
                // already sent to pipe (we're at end of invalid message).
                rc = xrep_t::rollback ();               // the encapsulation format is invalid, roll the message back
                zmq_assert (rc == 0);
            }
        }

        request_begins = false;
    }

    // Now the routing info is safely stored. Get the first part
    // of the message payload and exit.
    int rc = xrep_t::xrecv (msg_, flags_);              // receive the payload part
    if (rc != 0)
        return rc;

    // If whole request is read, flip the FSM to reply-sending state.
    if (!(msg_->flags & ZMQ_MSG_MORE)) {
        sending_reply = true;
        request_begins = true;
    }

    return 0;
}
There is one more detail here. Messages travel indirectly through asynchronous pipes, and a single REP socket may serve multiple REQ sockets, so it has multiple pipes attached. The pipes are therefore distinguished by identity: when REP receives a request, the peer's identity is captured so that the reply can later be placed into the corresponding reply pipe mentioned above.
The code makes this clear:
int zmq::xrep_t::xrecv (zmq_msg_t *msg_, int flags_)
{
    // If there is a prefetched message, return it.
    if (prefetched) {
        zmq_msg_move (msg_, &prefetched_msg);
        more_in = msg_->flags & ZMQ_MSG_MORE;
        prefetched = false;
        return 0;
    }

    // Deallocate old content of the message.
    zmq_msg_close (msg_);

    // If we are in the middle of reading a message, just grab next part of it.
    if (more_in) {
        zmq_assert (inpipes [current_in].active);
        bool fetched = inpipes [current_in].reader->read (msg_);
        zmq_assert (fetched);
        more_in = msg_->flags & ZMQ_MSG_MORE;
        if (!more_in) {
            current_in++;
            if (current_in >= inpipes.size ())
                current_in = 0;
        }
        return 0;
    }

    // Round-robin over the pipes to get the next message.
    for (int count = inpipes.size (); count != 0; count--) {

        // Try to fetch new message.
        if (inpipes [current_in].active)
            prefetched = inpipes [current_in].reader->read (&prefetched_msg);

        // If we have a message, create a prefix and return it to the caller.
        if (prefetched) {
            int rc = zmq_msg_init_size (msg_, inpipes [current_in].identity.size ());
            zmq_assert (rc == 0);
            memcpy (zmq_msg_data (msg_), inpipes [current_in].identity.data (),
                zmq_msg_size (msg_));
            msg_->flags |= ZMQ_MSG_MORE;
            return 0;
        }

        // If we don't have a message, mark the pipe as passive and
        // move to next pipe.
        inpipes [current_in].active = false;
        current_in++;
        if (current_in >= inpipes.size ())
            current_in = 0;
    }

    // No message is available. Initialise the output parameter
    // to be a 0-byte message.
    zmq_msg_init (msg_);
    errno = EAGAIN;
    return -1;
}
The part to note is the round-robin branch: when the first part of a new message is fetched from a pipe (in our example, the empty message part sent by REQ), a message part containing that pipe's identity is created and returned to the caller first, as a prefix.
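A hedged way to see this prefix from application code (not part of the original article) is to bind a ZMQ_XREP socket instead of ZMQ_REP: xrep_t::xrecv() then hands the parts to the application directly, first the generated peer identity, then the empty delimiter added by REQ, then the request body. Running the hello-world client against this sketch prints three parts.

#include <zmq.h>
#include <stdio.h>
#include <stdint.h>

int main (void)
{
    void *context = zmq_init (1);
    void *router = zmq_socket (context, ZMQ_XREP);
    zmq_bind (router, "tcp://*:5555");

    // Read one request from a REQ client, part by part.
    int64_t more = 1;
    size_t more_size = sizeof (more);
    int part_nbr = 0;
    while (more) {
        zmq_msg_t part;
        zmq_msg_init (&part);
        zmq_recv (router, &part, 0);
        // part 0: identity prefix, part 1: 0-byte delimiter, part 2: "Hello"
        printf ("part %d: %d bytes\n", part_nbr++, (int) zmq_msg_size (&part));
        zmq_getsockopt (router, ZMQ_RCVMORE, &more, &more_size);
        zmq_msg_close (&part);
    }

    zmq_close (router);
    zmq_term (context);
    return 0;
}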
After the routing prefix is received, it is pushed into the reply pipe via xrep_t::xsend():
int zmq::xrep_t::xsend (zmq_msg_t *msg_, int flags_)
{
    // If this is the first part of the message it's the identity of the
    // peer to send the message to.
    if (!more_out) {
        zmq_assert (!current_out);

        // If we have malformed message (prefix with no subsequent message)
        // then just silently ignore it.
        if (msg_->flags & ZMQ_MSG_MORE) {

            more_out = true;

            // Find the pipe associated with the identity stored in the prefix.
            // If there's no such pipe just silently ignore the message.
            blob_t identity ((unsigned char*) zmq_msg_data (msg_),
                zmq_msg_size (msg_));
            outpipes_t::iterator it = outpipes.find (identity);

            if (it != outpipes.end ()) {
                current_out = it->second.writer;
                zmq_msg_t empty;
                int rc = zmq_msg_init (&empty);
                zmq_assert (rc == 0);
                if (!current_out->check_write (&empty)) {
                    it->second.active = false;
                    more_out = false;
                    current_out = NULL;
                    rc = zmq_msg_close (&empty);
                    zmq_assert (rc == 0);
                    errno = EAGAIN;
                    return -1;
                }
                rc = zmq_msg_close (&empty);
                zmq_assert (rc == 0);
            }
        }

        int rc = zmq_msg_close (msg_);
        zmq_assert (rc == 0);
        rc = zmq_msg_init (msg_);
        zmq_assert (rc == 0);
        return 0;
    }

    // Check whether this is the last part of the message.
    more_out = msg_->flags & ZMQ_MSG_MORE;

    // Push the message into the pipe. If there's no out pipe, just drop it.
    if (current_out) {
        bool ok = current_out->write (msg_);
        zmq_assert (ok);
        if (!more_out) {
            current_out->flush ();
            current_out = NULL;
        }
    }
    else {
        int rc = zmq_msg_close (msg_);
        zmq_assert (rc == 0);
    }

    // Detach the message from the data buffer.
    int rc = zmq_msg_init (msg_);
    zmq_assert (rc == 0);

    return 0;
}
1. The output pipe is selected based on the identity, and the message parts are written to it.
2. Note that the identity-prefix part itself is never written to the output pipe; it is only used to look up the pipe and to check_write() it (see the sketch below).
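To tie the two halves together, here is a hedged sketch (not from the original article) of a one-shot ZMQ_XREP server that can stand in for the REP hello-world server for a single request: it reads the three parts and then sends the reply with the identity part in front, which xrep_t::xsend() consumes for routing, so the REQ client only ever sees the delimiter plus "World".

#include <zmq.h>
#include <string.h>

int main (void)
{
    void *context = zmq_init (1);
    void *router = zmq_socket (context, ZMQ_XREP);
    zmq_bind (router, "tcp://*:5555");

    // Read one request: [identity][empty delimiter]["Hello"].
    zmq_msg_t identity, empty, body;
    zmq_msg_init (&identity);
    zmq_recv (router, &identity, 0);
    zmq_msg_init (&empty);
    zmq_recv (router, &empty, 0);
    zmq_msg_init (&body);
    zmq_recv (router, &body, 0);
    zmq_msg_close (&empty);
    zmq_msg_close (&body);

    // Reply: the identity part is consumed by xrep_t::xsend() to pick the output
    // pipe (and check_write() it); it never reaches the wire, so the REQ client
    // only sees the delimiter plus "World".
    zmq_send (router, &identity, ZMQ_SNDMORE);
    zmq_msg_close (&identity);

    zmq_msg_t out_empty;
    zmq_msg_init (&out_empty);
    zmq_send (router, &out_empty, ZMQ_SNDMORE);
    zmq_msg_close (&out_empty);

    zmq_msg_t reply;
    zmq_msg_init_size (&reply, 5);
    memcpy (zmq_msg_data (&reply), "World", 5);
    zmq_send (router, &reply, 0);
    zmq_msg_close (&reply);

    zmq_close (router);
    zmq_term (context);
    return 0;
}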
3. REP sends a message:
From the analysis of how REP receives a request, we know that the reply header has already been written into the reply pipe.
Therefore, when REP sends the reply, all it has to do is write the reply data part into that reply pipe (the identity part only selected the pipe, and the empty delimiter is already there).
Let's take a look at the source code:
int zmq::rep_t::xsend (zmq_msg_t *msg_, int flags_)
{
    // If we are in the middle of receiving a request, we cannot send reply.
    if (!sending_reply) {
        errno = EFSM;
        return -1;
    }

    bool more = (msg_->flags & ZMQ_MSG_MORE);

    // Push message to the reply pipe.
    int rc = xrep_t::xsend (msg_, flags_);      // send the part to the reply pipe
    if (rc != 0)
        return rc;

    // If the reply is complete flip the FSM back to request processing state.
    if (!more)
        sending_reply = false;

    return 0;
}
We already saw xrep_t::xsend() when REP receives a message: it writes each part into the corresponding pipe and flushes the pipe once the last part has been written.
xrep_t::xsend():
    ...
    // Push the message into the pipe. If there's no out pipe, just drop it.
    if (current_out) {
        bool ok = current_out->write (msg_);
        zmq_assert (ok);
        if (!more_out) {
            current_out->flush ();
            current_out = NULL;
        }
    }
    ...
4. REQ receives the message:
When receiving, REQ first expects the empty message part and checks it, then receives the subsequent part(s) and returns them to the caller.
The code is as follows:
int zmq::req_t::xrecv (zmq_msg_t *msg_, int flags_)
{
    // If request wasn't sent, we can't wait for reply.
    if (!receiving_reply) {
        errno = EFSM;
        return -1;
    }

    // First part of the reply should be empty message part (stack bottom).
    if (message_begins) {
        // Consume the empty message part.
        int rc = xreq_t::xrecv (msg_, flags_);
        if (rc != 0)
            return rc;
        zmq_assert (msg_->flags & ZMQ_MSG_MORE);
        zmq_assert (zmq_msg_size (msg_) == 0);
        message_begins = false;
    }

    int rc = xreq_t::xrecv (msg_, flags_);      // receive the actual reply part
    if (rc != 0)
        return rc;

    // If the reply is fully received, flip the FSM into request-sending state.
    if (!(msg_->flags & ZMQ_MSG_MORE)) {
        receiving_reply = false;
        message_begins = true;
    }

    return 0;
}
Summary:
This article analyzed ZeroMQ's request-reply machinery through a simple REQ-REP example combined with the relevant source code.
The xreq_t::xsend() and xreq_t::xrecv() functions were not covered here. Briefly: when sending, a load-balancing strategy selects the peer; when receiving, fair queuing decides which peer is served first. The two strategies are similar in spirit, taking turns over the peers. We will analyze their implementation when we discuss router and dealer, and you will see that ZMQ_DEALER is implemented on top of xreq.
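As a hedged illustration of the sending-side load balancing (not from the original article), a single REQ socket connected to two REP endpoints, assumed here to be hello-world servers running on ports 5555 and 5556, has its successive requests distributed over the peers in round-robin fashion by the underlying xreq machinery:

#include <zmq.h>
#include <string.h>
#include <stdio.h>

int main (void)
{
    void *context = zmq_init (1);
    void *requester = zmq_socket (context, ZMQ_REQ);
    zmq_connect (requester, "tcp://localhost:5555");    // server A (assumed)
    zmq_connect (requester, "tcp://localhost:5556");    // server B (assumed)

    int request_nbr;
    for (request_nbr = 0; request_nbr != 4; request_nbr++) {
        zmq_msg_t request;
        zmq_msg_init_size (&request, 5);
        memcpy (zmq_msg_data (&request), "Hello", 5);
        zmq_send (requester, &request, 0);              // requests alternate between A and B
        zmq_msg_close (&request);

        zmq_msg_t reply;
        zmq_msg_init (&reply);
        zmq_recv (requester, &reply, 0);
        printf ("Got reply %d\n", request_nbr);
        zmq_msg_close (&reply);
    }

    zmq_close (requester);
    zmq_term (context);
    return 0;
}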
Next we will talk about router and dealer. Coming soon! If you are interested, contact me and we can learn together: Kaka11.chen@gmail.com