Starting with the API to understand QNX: message passing


As we all know, QNX is a microkernel operating system that relies on interprocess communication to tie the whole system together. So when you actually sit down to write a program, how exactly is this communication done? This chapter looks at the lowest-level message-passing API. Message passing goes through the kernel, so these "APIs" are really kernel calls. Note that when you really write programs on QNX you rarely use these calls directly; you normally use higher-level interfaces instead. Still, knowing these underlying calls is helpful for understanding the interfaces that are built on top of them.

Channels and connections
Message passing follows a client/server model, so how does a client talk to a server? The simplest way, of course, is to address the peer by process ID: the sender adds a header to the message and tells the kernel "send this message to PID 12345". That is in fact how it worked in the QNX4 days. But once QNX6 gained full POSIX thread support, this scheme no longer fit. What if the server has two threads offering different services? You might say, "send this message to PID 12345, TID 3". But what if a service is handled not by a single thread but by a pool of threads? To solve this, QNX6 abstracts the concept of a "channel". A channel is the entry point of a service; how many threads actually serve the channel is the server's own business. If a server offers several services, it can open several channels. A client, in turn, must first establish a connection to the channel, and then send its messages over that connection. The same client can, if necessary, open multiple connections to the same channel. So the general setup for communication looks like this:

Server

Code: Select All

chid = ChannelCreate(flags);

Client

Code: Select All

coid = ConnectAttach(nd, pid, chid, index, flags);

The server side needs no further explanation. To establish a connection, the client needs nd, the node descriptor, which identifies the machine on the network (for transparent distributed processing); if the client is on the same machine as the server, this value is 0, or ND_LOCAL_NODE. pid is the server's process ID, and chid is the channel ID the server obtained from ChannelCreate(). index and flags are discussed later. In essence the client says "make me a connection to node nd, process pid, channel chid". Once you have the connection, you are ready to pass messages.
A connection is terminated with ConnectDetach(), and a channel with ChannelDestroy(). In practice, though, a server usually lives for a long time, so ChannelDestroy() is rarely needed.
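
As a minimal sketch of this setup and teardown on the client side (the helper names and error handling here are illustrative, not part of the original example):

Code: Select All

/* Minimal client-side setup/teardown sketch; helper names are illustrative. */
#include <stdio.h>
#include <sys/neutrino.h>

int connect_to_server(pid_t server_pid, int server_chid)
{
    /* nd 0 = local node (a.k.a. ND_LOCAL_NODE in <sys/netmgr.h>),
     * index 0 and flags 0 are the usual defaults */
    int coid = ConnectAttach(0, server_pid, server_chid, 0, 0);
    if (coid == -1) {
        perror("ConnectAttach");
        return -1;
    }
    return coid;
}

void disconnect_from_server(int coid)
{
    ConnectDetach(coid);
}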

Send, Receive, and Reply
QNX's message passing differs from the interprocess communication we are traditionally used to: it is synchronous. A message is sent, received, and replied to in three parts, the so-called SRR sequence. Specifically, the client "sends" a message on its connection and is blocked as soon as it does; the server receives the message, processes it, and finally "replies" with the result; only after the server replies is the client unblocked. This synchronous flow not only guarantees the ordering between client and server, it also greatly simplifies the programming. In terms of the API it looks like this.
Server

Code: Select All

rcvid = MsgReceive(chid, receive_buffer, receive_buf_length, &msg_info);
(... check the message in the buffer and process it ...)
MsgReply(rcvid, reply_status, reply_buf, reply_len);

Client

Code: Select All

MsgSend(coid, send_buf, send_len, reply_buf, reply_len);
(... this thread is suspended by the OS ...)
(... when the server calls MsgReply(), the OS unblocks the thread, and the client can look at its own reply_buf to see the reply ...)

The server receives on the channel, processes, then replies; the client sends on the connection. Note that when sending, the client also provides a buffer to receive the reply into. If you are paying attention, you might ask: the server's MsgReceive() and the client's MsgSend() are not synchronized, won't that be a problem? For example, what happens if the client calls MsgSend() while the server is not sitting in MsgReceive()? The answer is that the OS still suspends the sending thread, moving it from the RUNNING state to the SEND-blocked state, until the server gets around to calling MsgReceive(); the kernel then copies send_buf into the server's receive buffer, and the sending thread becomes REPLY-blocked.
Similarly, if the server calls MsgReceive() and no client has sent anything, the server thread is suspended and enters the RECEIVE-blocked state.
When replying, you can also use MsgError() to tell the sender that an error occurred. Since MsgReply() can also return a status, you may ask what the difference is between the two. With MsgReply(rcvid, EINVAL, 0, 0), the client's MsgSend() returns EINVAL as its return value; with MsgError(rcvid, EINVAL), MsgSend() returns -1 and errno is set to EINVAL.
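
A small client-side sketch of this difference (the helper function and its arguments are illustrative):

Code: Select All

/* Sketch: how the two error paths look from the client's side.
 * The helper is illustrative, not part of the article's example. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/neutrino.h>

long send_and_check(int coid, const char *req, char *reply, size_t reply_size)
{
    long status = MsgSend(coid, req, strlen(req) + 1, reply, reply_size);
    if (status == -1) {
        /* server used MsgError(rcvid, err): MsgSend() fails and errno == err */
        perror("MsgSend");
    } else if (status != 0) {
        /* server used MsgReply(rcvid, status, ...): MsgSend() succeeds and
         * simply hands that status back as its return value */
        fprintf(stderr, "server replied with status %ld\n", status);
    }
    return status;
}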

Data areas and IOVs
In addition to passing a single linear buffer, message passing also lets you "gather" data with iov_t for convenience; that is, you can transfer several pieces of data in one call. Although on the client side the header and data_buf are two non-contiguous blocks of memory, once they land in the server's receive buffer they are contiguous. In other words, on the server side, to get at the original data_buf contents you only need (receive_buffer + sizeof(header)). (Just be careful with the data structures involved.)
Client

Code: Select All

SETIOV(&iov[0], &header, sizeof(header));
SETIOV(&iov[1], data_buf, data_len);
MsgSendvs(coid, iov, 2, reply_buf, reply_len);

"Header" and "Databuf" are discontinuous two pieces of data. After receiving the server,the "header" and "Databuf" are continuously present in the Receivebuffer.

Code: Select All

rcvid = MsgReceive(chid, receive_buffer, receive_buf_length, &msg_info);

header = (struct header *) receive_buffer;
data_buf = (char *) receive_buffer + sizeof(*header);
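
For completeness, here is a self-contained sketch of the client-side gather send; struct header, its fields, and the helper name are made up for illustration:

Code: Select All

/* Self-contained sketch of a gather send; struct header and its fields
 * are made up for illustration. */
#include <sys/neutrino.h>

struct header {
    int type;
    int data_len;
};

long send_with_header(int coid, int type, const char *data, size_t data_len,
                      char *reply_buf, size_t reply_len)
{
    struct header hdr = { type, (int) data_len };
    iov_t iov[2];

    SETIOV(&iov[0], &hdr, sizeof(hdr));
    SETIOV(&iov[1], data, data_len);

    /* the two parts arrive contiguously in the server's receive buffer */
    return MsgSendvs(coid, iov, 2, reply_buf, reply_len);
}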

Example
With these basic functions (kernel calls), we can now write a client and a server that do the most basic communication.
Server: after creating its channel, the server receives messages on it. If the message is the string "Hello", the server replies with the string "World"; if the message is "Ni Hao", it replies with "Zhong Guo"; any other message gets an error back via MsgError().

Code: Select All

$ cat simple_server.c

/* Simple server */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/neutrino.h>

int main()
{
    int chid, rcvid;
    char buf[128];

    if ((chid = ChannelCreate(0)) == -1) {
        perror("ChannelCreate");
        return -1;
    }

    printf("Server is ready, pid = %d, chid = %d\n", getpid(), chid);

    for (;;) {
        if ((rcvid = MsgReceive(chid, buf, sizeof(buf), NULL)) == -1) {
            perror("MsgReceive");
            return -1;
        }

        printf("server: received '%s'\n", buf);

        /* based on what we received, reply with some message */
        if (strcmp(buf, "Hello") == 0) {
            MsgReply(rcvid, 0, "World", strlen("World") + 1);
        } else if (strcmp(buf, "Ni Hao") == 0) {
            MsgReply(rcvid, 0, "Zhong Guo", strlen("Zhong Guo") + 1);
        } else {
            MsgError(rcvid, EINVAL);
        }
    }

    /* not reached */
    ChannelDestroy(chid);
    return 0;
}

Client: the client establishes a connection to the server using the server's process ID and channel ID taken from the command line. It then sends "Hello" and "Ni Hao" three times each and checks the return values. Finally it sends an "Unknown" message to see whether MsgSend() gets an error back.

Code: Select All

$ cat simple_client.c

/* Simple client */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/neutrino.h>

int main(int argc, char **argv)
{
    pid_t spid;
    int chid, coid, i;
    char buf[128];

    if (argc < 3) {
        fprintf(stderr, "Usage: simple_client <pid> <chid>\n");
        return -1;
    }

    spid = atoi(argv[1]);
    chid = atoi(argv[2]);

    if ((coid = ConnectAttach(0, spid, chid, 0, 0)) == -1) {
        perror("ConnectAttach");
        return -1;
    }

    /* send 3 pairs of "Hello" and "Ni Hao" */
    for (i = 0; i < 3; i++) {
        sprintf(buf, "Hello");
        printf("client: sent '%s'\n", buf);
        if (MsgSend(coid, buf, strlen(buf) + 1, buf, sizeof(buf)) != 0) {
            perror("MsgSend");
            return -1;
        }
        printf("client: returned '%s'\n", buf);

        sprintf(buf, "Ni Hao");
        printf("client: sent '%s'\n", buf);
        if (MsgSend(coid, buf, strlen(buf) + 1, buf, sizeof(buf)) != 0) {
            perror("MsgSend");
            return -1;
        }
        printf("client: returned '%s'\n", buf);
    }

    /* send a bad message and see if we get an error back */
    sprintf(buf, "Unknown");
    printf("client: sent '%s'\n", buf);
    if (MsgSend(coid, buf, strlen(buf) + 1, buf, sizeof(buf)) != 0) {
        perror("MsgSend");
        return -1;
    }

    ConnectDetach(coid);

    return 0;
}

The result of compiling and running them looks like this:

Server:

Code: Select All

$ ./simple_server
Server is ready, pid = 36409378, chid = 2
server: received 'Hello'
server: received 'Ni Hao'
server: received 'Hello'
server: received 'Ni Hao'
server: received 'Hello'
server: received 'Ni Hao'
server: received 'Unknown'

Client:

Code: Select All

$ ./simple_client 36409378 2
client: sent 'Hello'
client: returned 'World'
client: sent 'Ni Hao'
client: returned 'Zhong Guo'
client: sent 'Hello'
client: returned 'World'
client: sent 'Ni Hao'
client: returned 'Zhong Guo'
client: sent 'Hello'
client: returned 'World'
client: sent 'Ni Hao'
client: returned 'Zhong Guo'
client: sent 'Unknown'
MsgSend: Invalid argument

Variable message length
As you can see from the program above, the essence of message passing is copying data from one buffer into another buffer (in another process). The question is, how do you decide how big the buffers should be? In the example above the server uses a 128-byte buffer; if a client sends, say, a 512-byte message, does the transfer go wrong?
The answer is that the transfer still succeeds, but only the first 128 bytes of the send buffer are copied. The trick is that the server has to notice this situation and fetch the complete data.
In MsgReceive(), the fourth parameter is a struct _msg_info. The kernel fills in this structure while delivering the message, giving you some information about it. In this structure, msglen tells you how many bytes were actually delivered into your buffer (128 in our case), and srcmsglen tells you how big the sender's buffer actually is (512 in our case). By comparing the two, the server can tell whether it has received all the data.
What if the server finds that there is more data it has not received? QNX provides a dedicated function, MsgRead(). With it, the server can "read" data out of the sender's send buffer: MsgRead() essentially asks the kernel to copy a given number of bytes back, starting at a given offset in the send buffer. So this part of the server-side code looks roughly like this.

Code: Select All

int rcvid;
struct _msg_info info;
char buf[128], *totalmsg;

...

rcvid = MsgReceive(chid, buf, sizeof(buf), &info);
...
if (info.srcmsglen > info.msglen) {
    totalmsg = malloc(info.srcmsglen);
    if (!totalmsg) {
        MsgError(rcvid, ENOMEM);
        continue;
    }
    memcpy(totalmsg, buf, info.msglen);
    /* read the rest of the sender's buffer, starting right after what we got */
    if (MsgRead(rcvid, totalmsg + info.msglen,
                info.srcmsglen - info.msglen, info.msglen) == -1) {
        MsgError(rcvid, EINVAL);
        continue;
    }
} else {
    totalmsg = buf;
}

/* now totalmsg points to a full message; don't forget to free() it later on,
 * if totalmsg was malloc()'d here
 */

You may ask: why can the server, the receiving side, also read the client's data? Because, as mentioned at the beginning, QNX message passing is "synchronous". Remember? Until the server replies, the client stays blocked, so the client's send buffer stays right there and does not change. (Could another thread scribble over that buffer or even free() it? Sure, but that is a bug in your client program.)

Similarly, sometimes the server needs to return a large amount of data to the client (for example, 1 MB). The server may not want to malloc(1024 * 1024), MsgReply(), and then free() (in embedded programs, frequent malloc()/free() is not a good habit). In that case the server can use a small fixed-size buffer, say 16 KB, and "write back" the data into the client's reply buffer a piece at a time with MsgWrite(). It looks like the code below. Remember to finish with a MsgReply() so that the client can run again.

Code: Select All

char buf[16 * 1024];
unsigned offset;

for (offset = 0; offset < 1024 * 1024; offset += 16 * 1024) {
    /* move the next chunk of data into buf */
    MsgWrite(rcvid, buf, 16 * 1024, offset);
}
/* 1 MB written back; MsgReply() to let the client go */
MsgReply(rcvid, 0, 0, 0);

Example
The following are the read() and write() functions from QNX's C library; with the background above, they should be easy to understand. First, no matter how the fd was obtained, just think of an fd as a connection ID returned by ConnectAttach(). Although read() fetches data from a server and write() pushes data to a server, in essence both simply make a request that the server replies to. For write(), the request is an io_write_t, and MsgSendv() sends the request header together with the data to be written; for read(), the request is packed into an io_read_t and MsgSend() passes it to the server. read()'s result buffer is simply the reply buffer, which the server fills in with MsgReply().
read():

Code: Select All

#include <unistd.h>
#include <sys/neutrino.h>
#include <sys/iomsg.h>

ssize_t read(int fd, void *buff, size_t nbytes)
{
    io_read_t msg;

    msg.i.type = _IO_READ;
    msg.i.combine_len = sizeof msg.i;
    msg.i.nbytes = nbytes;
    msg.i.xtype = _IO_XTYPE_NONE;
    msg.i.zero = 0;
    return MsgSend(fd, &msg.i, sizeof msg.i, buff, nbytes);
}

write():

Code: Select All

#include <unistd.h>
#include <sys/neutrino.h>
#include <sys/iomsg.h>

ssize_t write(int fd, const void *buff, size_t nbytes)
{
    io_write_t msg;
    iov_t iov[2];

    msg.i.type = _IO_WRITE;
    msg.i.combine_len = sizeof msg.i;
    msg.i.xtype = _IO_XTYPE_NONE;
    msg.i.nbytes = nbytes;
    msg.i.zero = 0;
    SETIOV(iov + 0, &msg.i, sizeof msg.i);
    SETIOV(iov + 1, buff, nbytes);
    return MsgSendv(fd, iov, 2, 0, 0);
}

How should the server side handle these requests? If you think back to MsgRead()/MsgWrite(), it is not hard to imagine how the server works.
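
As a rough sketch of such a server (real servers normally use the resource-manager framework instead; the dispatch and the in-memory data below are simplified assumptions):

Code: Select All

/* Rough sketch of a server handling _IO_READ and _IO_WRITE by hand.
 * Real servers normally use the resource-manager framework; the dispatch
 * below and the in-memory "file" are simplified assumptions. */
#include <errno.h>
#include <stdint.h>
#include <sys/neutrino.h>
#include <sys/iomsg.h>

static char   file_data[4096];   /* pretend this is the served data */
static size_t file_size;

void handle_io(int rcvid, void *buf)
{
    uint16_t type = *(uint16_t *) buf;   /* every io message starts with a type */

    if (type == _IO_READ) {
        io_read_t *msg = buf;
        size_t n = msg->i.nbytes < file_size ? msg->i.nbytes : file_size;
        /* the reply data lands in the client's reply buffer (read()'s buff);
         * the reply status becomes read()'s return value */
        MsgReply(rcvid, n, file_data, n);
    } else if (type == _IO_WRITE) {
        io_write_t *msg = buf;
        size_t n = msg->i.nbytes < sizeof(file_data) ? msg->i.nbytes
                                                     : sizeof(file_data);
        /* the data follows the io_write_t header in the sender's buffer,
         * so fetch it with MsgRead() at that offset */
        MsgRead(rcvid, file_data, n, sizeof(msg->i));
        file_size = n;
        MsgReply(rcvid, n, NULL, 0);
    } else {
        MsgError(rcvid, ENOSYS);
    }
}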

Pulses
A pulse is more like a short message, and it is also sent over a connection. The most important property of a pulse is that it is asynchronous: the sender does not have to wait for the receiver to reply, and can carry on immediately. This asynchrony also brings a limitation: a pulse can carry only a very small amount of data, an 8-bit "code" field to distinguish different pulses and a 32-bit "value" field to carry data. The most important use of pulses is notification. It is not just user programs: the kernel also sends special "system pulses" to user programs to notify them that particular situations have occurred.
Receiving pulses is relatively simple. If you know a channel will carry only pulses and no other messages, you can use MsgReceivePulse() to receive just pulses. If the channel can receive both messages and pulses, use MsgReceive() directly; just make sure the receive buffer can hold at least a pulse (sizeof(struct _pulse)). In the latter case, if MsgReceive() returns a rcvid of 0, a pulse was received; otherwise a message was received. So a server that receives both pulses and messages can look like this.

Code: Select All

union {
    struct _pulse pulse;
    Msg_header    header;   /* your own message header type */
} msgs;

if ((rcvid = MsgReceive(chid, &msgs, sizeof(msgs), &info)) == -1) {
    perror("MsgReceive");
    continue;
}
if (rcvid == 0) {
    process_pulse(&msgs, &info);
} else {
    process_message(&msgs, &info);
}
The most direct way to send a pulse is MsgSendPulse(). In practice, though, this function is usually only used within one process, where one thread wants to notify another. Across processes it is rarely called directly; instead, pulses are usually delivered through MsgDeliverEvent(), described below.
Unlike messages, which always travel between processes (a process never exchanges messages with the kernel), pulses can also come from the kernel: in addition to user processes sending pulses, the kernel sends "system pulses" to user processes to notify them of events.
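
For illustration, a minimal sketch of one thread pulsing another within the same process; the pulse code and priority chosen here are arbitrary:

Code: Select All

/* Minimal sketch: one thread pulses another thread in the same process.
 * The pulse code and priority values are illustrative. */
#include <stdio.h>
#include <sys/neutrino.h>

int chid, coid;

int setup(void)
{
    chid = ChannelCreate(0);
    /* nd 0 = this node, pid 0 = this process; _NTO_SIDE_CHANNEL keeps the
     * connection ID out of the range used for file descriptors */
    coid = ConnectAttach(0, 0, chid, _NTO_SIDE_CHANNEL, 0);
    return (chid == -1 || coid == -1) ? -1 : 0;
}

void notifier(int value)
{
    /* pulse priority 10 is arbitrary here */
    MsgSendPulse(coid, 10, _PULSE_CODE_MINAVAIL, value);
    /* not blocked: the sender carries on immediately */
}

void waiter(void)
{
    struct _pulse pulse;

    if (MsgReceivePulse(chid, &pulse, sizeof(pulse), NULL) == -1) {
        perror("MsgReceivePulse");
        return;
    }
    printf("got pulse code %d, value %d\n", pulse.code, pulse.value.sival_int);
}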

Direction of message passing and MsgDeliverEvent()
From the outset, QNX message passing has been client/server based: the client always sends a request to the server and waits for the reply. In reality, however, client and server are not always so easy to tell apart. Some servers must themselves send messages to other servers in order to handle a client's request; some clients need services from several different servers and cannot afford to block on one particular server; and sometimes data flows in both directions between two processes. What should you do then?
You might argue that the two processes could simply talk to each other both ways: each process creates its own channel, each connects to the other's channel, and whenever needed, either one can send a message to the other directly over its connection, just like a pipe or a socketpair. Please note that this design should be avoided with QNX message passing, because it easily leads to deadlock. A common scenario is this.
Process A: MsgSend() to process B
Process B: MsgReceive() receives the message
Process B: processes the message, then MsgSend() to process A
Because process A is blocked, it cannot receive and handle B's request, so A sits REPLY-blocked while B enters the SEND-blocked state because of its MsgSend(), and the two processes are locked against each other. Of course, if both A and B use multiple threads, say a dedicated MsgReceive() thread each, this may be avoided, but then you have to make sure the MsgReceive() thread never calls MsgSend() itself, or it will deadlock all the same. When the program is simple you may have this under control; when the program grows complex, or you are writing a library and have no control over how others use it, it is best to avoid this design.
In QNX, the right approach is this:
Client: prepare a "notification event" (struct sigevent) and send it to the server with MsgSend(), meaning: "if xxx happens, please notify me with this event".
Server: on receiving this message, record the rcvid and the delivered event, then reply "OK, got it".
Client: because the server replied, the client is no longer blocked and can go off and do other things.
Server: at some point, the "xxx condition" the client asked about is satisfied, and the server calls MsgDeliverEvent(rcvid, event) to notify the client.
Client: receives the notification, then uses MsgSend() to ask "where is the data for the xxx case?"
Server: uses MsgReply() to return the data to the client.
For a concrete example, see the sketch below, or refer to the MsgDeliverEvent() documentation.
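
A rough sketch of this pattern using a pulse as the notification event (the helper names, the pulse code, and the trigger condition are illustrative assumptions, not from the article):

Code: Select All

/* Rough sketch of the notification pattern; helper names, pulse code, and
 * the trigger condition are illustrative assumptions. */
#include <sys/neutrino.h>
#include <sys/siginfo.h>

#define NOTIFY_PULSE_CODE  (_PULSE_CODE_MINAVAIL + 1)

/* ---------- client side ---------- */
void client_register(int server_coid, int self_coid)
{
    struct sigevent ev;
    char reply[16];

    /* self_coid is a connection from the client to its OWN channel:
     * "when xxx happens, pulse me there" */
    SIGEV_PULSE_INIT(&ev, self_coid, SIGEV_PULSE_PRIO_INHERIT,
                     NOTIFY_PULSE_CODE, 0);
    MsgSend(server_coid, &ev, sizeof(ev), reply, sizeof(reply));
    /* unblocked as soon as the server replies; do other work, then wait for
     * the pulse with MsgReceive()/MsgReceivePulse() on our own channel */
}

/* ---------- server side ---------- */
static int             client_rcvid = -1;
static struct sigevent client_event;

void server_handle_register(int rcvid, const struct sigevent *ev)
{
    client_rcvid = rcvid;         /* remember who to notify */
    client_event = *ev;           /* and with what event */
    MsgReply(rcvid, 0, NULL, 0);  /* "OK, got it" -- the client is unblocked */
}

void server_condition_happened(void)
{
    if (client_rcvid != -1) {
        MsgDeliverEvent(client_rcvid, &client_event);
    }
}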

Path names
Now recall our first example: how did the client and server get connected? The client needed the server's nd, pid and chid to establish the connection. In our example we had the server print those numbers and then passed them to the client on its command line when it started. In a real system, though, processes start and terminate all the time, and the startup order of servers and clients cannot be controlled, so this method is obviously not workable.
QNX's solution is to skillfully combine "path names" with the "service channel" concept described above. A server process can register a path name and associate it with the nd, pid and chid of its service channel. The client then no longer needs to know the server's nd, pid and chid; it simply asks to connect to the service's path name. Concretely, name_attach() creates a channel and registers a name for it, while name_open() connects to a channel registered under that name. A full example can be found in the name_attach() documentation; a brief sketch follows.
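
A minimal sketch of these two calls (the service name and the trivial message handling are illustrative; a real server also has to deal properly with the _IO_CONNECT message and the disconnect pulses that name_open()/name_close() produce on this channel):

Code: Select All

/* Minimal name_attach()/name_open() sketch; service name and message
 * handling are illustrative. */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/dispatch.h>
#include <sys/iomsg.h>

#define SERVICE_NAME "my_service"

int server(void)
{
    name_attach_t *att = name_attach(NULL, SERVICE_NAME, 0);
    char buf[128];
    int rcvid;

    if (att == NULL) {
        perror("name_attach");
        return -1;
    }
    for (;;) {
        rcvid = MsgReceive(att->chid, buf, sizeof(buf), NULL);
        if (rcvid > 0) {                           /* a message, not a pulse */
            if (*(uint16_t *) buf == _IO_CONNECT) {
                MsgReply(rcvid, EOK, NULL, 0);     /* sent by name_open() */
            } else {
                MsgReply(rcvid, 0, "ack", 4);      /* our own request */
            }
        }
    }
    /* not reached */
    name_detach(att, 0);
    return 0;
}

int client(void)
{
    char reply[128];
    int coid = name_open(SERVICE_NAME, 0);

    if (coid == -1) {
        perror("name_open");
        return -1;
    }
    MsgSend(coid, "hi", 3, reply, sizeof(reply));
    name_close(coid);
    return 0;
}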

