Inter-process communication (6)


Reading from and Writing to a FIFO

Using the O_NONBLOCK mode affects how read and write calls behave on a FIFO.

A read on an empty blocking FIFO (one opened without O_NONBLOCK) will wait until some data can be read. Conversely, a read on a non-blocking FIFO with no data returns immediately: 0 if no process has the FIFO open for writing, or -1 with errno set to EAGAIN if a writer is still attached.

A write on a full blocking FIFO will wait until the data can be written. A write on a FIFO that cannot accept all of the bytes being written will either:
Fail, if the request is for PIPE_BUF bytes or less and the data cannot be written.
Write part of the data, if the request is for more than PIPE_BUF bytes, returning the number of bytes actually written, which may be less than requested.

The size of a FIFO is an important consideration. There is a system-imposed limit on how much data can be queued in a FIFO at any one time. This is the #define PIPE_BUF, usually found in limits.h. On Linux and many other UNIX-like systems this value is commonly 4096 bytes, but on some systems it may be as small as 512 bytes. The system guarantees that a write of PIPE_BUF bytes or fewer to a FIFO opened O_WRONLY (without O_NONBLOCK) is atomic: either all the bytes are written or none are.

Although this restriction matters little when there is a single FIFO writer and a single FIFO reader, it is very important in the common case where one FIFO is used to let multiple programs send requests to a single FIFO reader. If several programs attempt to write to the FIFO at the same time and their writes are not atomic, the data blocks from the different programs could become interleaved, corrupting every request. This is why each write operation must be atomic.

However, if we ensure that all of our write requests go to a blocking FIFO and that each is smaller than PIPE_BUF bytes, the system guarantees that the data will never be interleaved. It is generally a good idea to strictly limit the size of the data blocks sent through a FIFO to PIPE_BUF bytes, unless we are using only a single reader and a single writer.

Test: inter-process communication using a FIFO

To demonstrate how unrelated processes can communicate using a named pipe, we need two separate programs, fifo3.c and fifo4.c.

1. The first program is our producer. It creates the FIFO if necessary and then writes data to it as quickly as possible.

Note: for demonstration purposes we do not care what the data is, so we do not initialize the buffer.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <limits.h>
#include <sys/types.h>
#include <sys/stat.h>

#define FIFO_NAME "/tmp/my_fifo"
#define BUFFER_SIZE PIPE_BUF
#define TEN_MEG (1024 * 1024 * 10)

int main()
{
    int pipe_fd;
    int res;
    int open_mode = O_WRONLY;
    int bytes_sent = 0;
    char buffer[BUFFER_SIZE + 1];

    if (access(FIFO_NAME, F_OK) == -1)
    {
        res = mkfifo(FIFO_NAME, 0777);
        if (res != 0)
        {
            fprintf(stderr, "Could not create fifo %s\n", FIFO_NAME);
            exit(EXIT_FAILURE);
        }
    }

    printf("Process %d opening FIFO O_WRONLY\n", getpid());
    pipe_fd = open(FIFO_NAME, open_mode);
    printf("Process %d result %d\n", getpid(), pipe_fd);

    if (pipe_fd != -1)
    {
        while (bytes_sent < TEN_MEG)
        {
            res = write(pipe_fd, buffer, BUFFER_SIZE);
            if (res == -1)
            {
                fprintf(stderr, "Write error on pipe\n");
                exit(EXIT_FAILURE);
            }
            bytes_sent += res;
        }
        (void)close(pipe_fd);
    }
    else
    {
        exit(EXIT_FAILURE);
    }

    printf("Process %d finished\n", getpid());
    exit(EXIT_SUCCESS);
}

2. Our second program, the consumer, is much simpler. It reads data from the FIFO and discards it.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <limits.h>
#include <sys/types.h>
#include <sys/stat.h>

#define FIFO_NAME "/tmp/my_fifo"
#define BUFFER_SIZE PIPE_BUF

int main()
{
    int pipe_fd;
    int res;

    int open_mode = O_RDONLY;
    char buffer[BUFFER_SIZE + 1];
    int bytes_read = 0;

    memset(buffer, '\0', sizeof(buffer));

    printf("Process %d opening FIFO O_RDONLY\n", getpid());
    pipe_fd = open(FIFO_NAME, open_mode);
    printf("Process %d result %d\n", getpid(), pipe_fd);

    if (pipe_fd != -1)
    {
        do
        {
            res = read(pipe_fd, buffer, BUFFER_SIZE);
            bytes_read += res;
        } while (res > 0);
        (void)close(pipe_fd);
    }
    else
    {
        exit(EXIT_FAILURE);
    }
    printf("Process %d finished, %d bytes read\n", getpid(), bytes_read);
    exit(EXIT_SUCCESS);
}

When we run these programs at the same time, using the time command to measure the reader, we get the following output:

$ ./fifo3 &
[1] 375
Process 375 opening FIFO O_WRONLY
$ time ./fifo4
Process 377 opening FIFO O_RDONLY
Process 375 result 3
Process 377 result 3
Process 375 finished
Process 377 finished, 10485760 bytes read

real    0m0.053s
user    0m0.020s
sys     0m0.040s
[1]+  Done                    fifo3

Working Principle

Both programs use the FIFO in blocking mode. We start fifo3 first; it blocks, waiting for a reader to open the FIFO. When fifo4 is started, the writer unblocks and starts writing data to the pipe; at the same time, the reader starts reading data from it.

The output of the time command shows that the reader ran for only 0.053 seconds, yet read 10 MB of data in the process. This shows that pipes are an efficient way to exchange data between programs.

Advanced topic: a client/server using FIFOs

For our final look at FIFOs, let's consider how we might build a very simple client/server application using named pipes. We want a single server process that accepts requests, processes them, and returns the resulting data to the requester: the client.

We want to allow multiple client processes to send data to the server. For simplicity, we assume that the data to be processed can be broken into blocks, each smaller than PIPE_BUF bytes. Of course, we could implement this system in many ways, but we will consider just one method, as an illustration of how named pipes can be used.

Because the server will process only one block of information at a time, it seems logical to have a single FIFO that is read by the server and written to by each of the clients. By opening the FIFO in blocking mode, the server and the clients will automatically block as required.

Returning the processed data to the clients is slightly more difficult. We arrange a second pipe, one per client, for the returned data. By passing the client's process identifier (PID) as part of the original data sent to the server, both parties can use it to generate a unique name for the return pipe.

Test: an example client/server program

1. First, we need a header file, client.h, that defines the data common to both the client and server programs. It also includes the required system headers.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <limits.h>
#include <sys/types.h>
#include <sys/stat.h>

#define SERVER_FIFO_NAME "/tmp/serv_fifo"
#define CLIENT_FIFO_NAME "/tmp/cli_%d_fifo"

#define BUFFER_SIZE 20

struct data_to_pass_st
{
    pid_t client_pid;
    char some_data[BUFFER_SIZE - 1];
};

2. Now for the server program, server.c. In this section, we create and then open the server pipe, setting it to read-only, blocking mode. After a sleep (for demonstration purposes), the server reads data sent by a client, which has the data_to_pass_st structure.

#include "client.h"
#include <ctype.h>

int main()
{
    int server_fifo_fd, client_fifo_fd;
    struct data_to_pass_st my_data;
    int read_res;
    char client_fifo[256];
    char *tmp_char_ptr;

    mkfifo(SERVER_FIFO_NAME, 0777);
    server_fifo_fd = open(SERVER_FIFO_NAME, O_RDONLY);
    if (server_fifo_fd == -1)
    {
        fprintf(stderr, "Server fifo failure\n");
        exit(EXIT_FAILURE);
    }

    sleep(10); /* lets clients queue up, for demonstration purposes */

    do
    {
        read_res = read(server_fifo_fd, &my_data, sizeof(my_data));
        if (read_res > 0)
        {

3. In this next stage, we perform some processing on the data just read from the client: we convert all the characters in some_data to uppercase and combine CLIENT_FIFO_NAME with the received client_pid.

            tmp_char_ptr = my_data.some_data;
            while (*tmp_char_ptr)
            {
                *tmp_char_ptr = toupper(*tmp_char_ptr);
                tmp_char_ptr++;
            }
            sprintf(client_fifo, CLIENT_FIFO_NAME, my_data.client_pid);

4. Then we send the processed data back, opening the client pipe in write-only, blocking mode. Finally, we shut down the server FIFO by closing the file and then unlinking the FIFO.

            client_fifo_fd = open(client_fifo, O_WRONLY);
            if (client_fifo_fd != -1)
            {
                write(client_fifo_fd, &my_data, sizeof(my_data));
                close(client_fifo_fd);
            }
        }
    } while (read_res > 0);
    close(server_fifo_fd);
    unlink(SERVER_FIFO_NAME);
    exit(EXIT_SUCCESS);
}

5. Here is the client, client.c. The first part of this program opens the server FIFO, if it already exists as a file. It then obtains its own process ID, which forms part of the data to be sent to the server. The client FIFO is created, ready for the next section.

#include "client.h"
#include <ctype.h>

int main()
{
    int server_fifo_fd, client_fifo_fd;
    struct data_to_pass_st my_data;
    int times_to_send;
    char client_fifo[256];

    server_fifo_fd = open(SERVER_FIFO_NAME, O_WRONLY);
    if (server_fifo_fd == -1)
    {
        fprintf(stderr, "Sorry, no server\n");
        exit(EXIT_FAILURE);
    }

    my_data.client_pid = getpid();
    sprintf(client_fifo, CLIENT_FIFO_NAME, my_data.client_pid);
    if (mkfifo(client_fifo, 0777) == -1)
    {
        fprintf(stderr, "Sorry, can't make %s\n", client_fifo);
        exit(EXIT_FAILURE);
    }

6. For each of its five loops, the client sends its data to the server, then opens the client FIFO and reads the returned data. Finally, the server FIFO is closed and the client FIFO is unlinked from the filesystem.

    for (times_to_send = 0; times_to_send < 5; times_to_send++)
    {
        sprintf(my_data.some_data, "Hello from %d", my_data.client_pid);
        printf("%d sent %s, ", my_data.client_pid, my_data.some_data);
        write(server_fifo_fd, &my_data, sizeof(my_data));
        client_fifo_fd = open(client_fifo, O_RDONLY);
        if (client_fifo_fd != -1)
        {
            if (read(client_fifo_fd, &my_data, sizeof(my_data)) > 0)
            {
                printf("received: %s\n", my_data.some_data);
            }
            close(client_fifo_fd);
        }
    }
    close(server_fifo_fd);
    unlink(client_fifo);
    exit(EXIT_SUCCESS);
}

To test this program, we need to run a single copy of the server and several clients. To get them all started at roughly the same time, we use the following shell commands:

$ ./server &
$ for i in 1 2 3 4 5
do
    ./client &
done
$

This starts one server process and five client processes. The output from the clients looks like this:

531 sent Hello from 531, received: HELLO FROM 531
532 sent Hello from 532, received: HELLO FROM 532
529 sent Hello from 529, received: HELLO FROM 529
530 sent Hello from 530, received: HELLO FROM 530
531 sent Hello from 531, received: HELLO FROM 531
532 sent Hello from 532, received: HELLO FROM 532

As you can see, the requests from different clients are interleaved, yet each client gets its own processed data correctly returned to it. Note that the interleaving is random; the order in which client requests are received will vary between machines and even between runs on the same machine.

Working Principle

Now we will explain the sequence of client and server operations as they interact, covering some points we have not yet dealt with.

The server creates its FIFO in read-only mode and blocks. It does this until the first client connects by opening the same FIFO for writing. At that point the server process unblocks and the sleep executes, so the writes from the clients queue up. (In a real program the sleep call would be removed; we use it here only to demonstrate correct operation with multiple simultaneous clients.)

In the meantime, after each client opens the server FIFO, it creates its own uniquely named FIFO for reading back data from the server. Only then does the client send its data to the server (blocking if the pipe is full or the server is still asleep) and block on a read of its own FIFO, waiting for the reply.

On receiving the data from the client, the server processes it, opens the client pipe for writing, and writes the data back, which unblocks the client. Once unblocked, the client can read the data the server wrote to its pipe.

The whole process repeats until the last client closes the server pipe, causing the server's read to return 0 (end of file) because no process has the server pipe open for writing. If this were a real server process that needs to wait for further clients, we would need to modify it to use one of the following techniques:

Open a file descriptor to its own server pipe for writing, so that read always blocks rather than returning 0.
Close and reopen the server pipe when read returns 0 bytes, so that the server process blocks in the open call waiting for a client, just as it did when it first started.

Both of these techniques are demonstrated in the CD database application when it is rewritten to use named pipes.
