Python concurrent programming: Blocking IO

Source: Internet
Author: User
Tags: connection pooling

Blocking IO

In Linux, all sockets are blocking by default, and a typical read operation flows roughly like this:

When the user process invokes the recvfrom system call, the kernel begins the first phase of IO: preparing the data. For network IO, the data often has not arrived yet at that point (for example, a complete UDP packet has not been received), so the kernel must wait for enough data to arrive.

On the user-process side, the whole process is blocked during this time. Once the kernel has waited until the data is ready, it copies the data from kernel space into user memory and returns the result; only then does the user process leave the blocked state and resume running.

  Therefore, blocking IO is characterized by being blocked in both phases of IO execution: waiting for the data and copying the data.
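The blocking behaviour described above can be observed directly. Below is a minimal sketch: it uses socket.socketpair to stand in for a real network connection (an assumption made so the example is self-contained), and a background thread that delays sending, so that recv() must sit in the "waiting for data" phase before it can return.

```python
import socket
import threading
import time

# A connected socket pair stands in for a real client/server
# connection (an assumption, to keep the example self-contained).
a, b = socket.socketpair()

def delayed_sender():
    # Simulate the "waiting for data" phase: the peer sends nothing
    # for 0.3 seconds, so recv() on the other end must block.
    time.sleep(0.3)
    b.send(b"hello")

threading.Thread(target=delayed_sender).start()

start = time.monotonic()
data = a.recv(1024)        # blocks here until the data is ready
elapsed = time.monotonic() - start

# recv() only returned after the peer's delay, demonstrating the block.
print(data, round(elapsed, 1))
a.close()
b.close()
```

Running this shows that the calling thread is suspended inside recv() for the full delay; it cannot do anything else in the meantime.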

Almost every programmer's first contact with network programming starts with interfaces such as listen(), send(), and recv(), which make it very convenient to build a server/client model. However, most socket interfaces are blocking. A so-called blocking interface is a system call (typically an IO interface) that does not return a result until it either obtains one or hits a timeout error, keeping the current thread blocked in the meantime.

Virtually all IO interfaces (including the socket interfaces) are blocking unless otherwise specified. This poses a big problem for network programming: while a thread is blocked in a call such as recv(1024), it cannot perform any other operation or respond to any other network request.

A simple solution:

Use multithreading (or multiprocessing) on the server side, giving each connection its own thread (or process), so that blocking on any one connection does not affect the others.

  The problem with this approach:

When faced with hundreds of simultaneous connection requests, spawning a process or thread per connection, whichever is used, seriously consumes system resources and reduces the system's responsiveness to the outside world; moreover, the threads and processes themselves can easily end up in a hung state.

 An improved approach:

Many programmers might consider using a "thread pool" or "connection pool." A thread pool reduces the frequency of creating and destroying threads by maintaining a reasonable number of threads and letting idle threads take on new tasks. A connection pool maintains a cache of established connections, reusing existing connections as much as possible and reducing the frequency with which connections are created and closed. Both techniques reduce system overhead and are widely used in many large systems, such as WebSphere, Tomcat, and various databases.
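As a sketch of the thread-pool idea, Python's standard concurrent.futures module provides a ready-made pool: a fixed number of worker threads is reused across many tasks instead of one thread being created per task. The handle function below is a hypothetical stand-in for the per-connection work a server would do.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-request work: a stand-in for what a server's
# connection handler would actually do.
def handle(request: bytes) -> bytes:
    return request.upper()

# Five worker threads are created once and reused for every task,
# instead of creating and destroying a thread per request.
with ThreadPoolExecutor(max_workers=5) as pool:
    requests = [b"hello", b"world", b"pool"]
    results = list(pool.map(handle, requests))

print(results)  # [b'HELLO', b'WORLD', b'POOL']
```

Note that max_workers is exactly the "upper limit" discussed below: once all workers are busy, additional tasks queue up and wait.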

  The improved approach still has problems:

"Thread pool" and "connection pool" techniques only mitigate, to some extent, the resource consumption caused by frequent IO calls. Moreover, a pool always has an upper limit; when requests greatly exceed that limit, a pooled system's response to the outside world is not much better than having no pool at all. So when using a pool, you must consider the scale of the load it faces and adjust the pool size accordingly.

  In response to the thousands of simultaneous client requests that may appear in the scenario above, a "thread pool" or "connection pool" can alleviate some of the pressure, but not all of it. In short, the multithreaded model can easily and efficiently handle small-scale service requests, but in the face of large-scale requests it runs into a bottleneck; at that point, you can try non-blocking interfaces to solve the problem.
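As a glimpse of the non-blocking alternative mentioned above, the sketch below (again using a local socket pair, an assumption to keep it self-contained) disables blocking on a socket. recv() then returns control immediately instead of suspending the thread: it raises BlockingIOError when no data has arrived yet, and the thread is free to do other work or check readiness with select.

```python
import select
import socket

# A connected socket pair stands in for a real connection
# (an assumption, to keep the example self-contained).
a, b = socket.socketpair()
a.setblocking(False)   # switch this end to non-blocking mode

try:
    a.recv(1024)       # no data yet: control returns at once
    got_data = True
except BlockingIOError:
    got_data = False   # the call did NOT block the thread

b.send(b"ping")
# Wait (up to 1 s) until the socket is actually readable, then the
# same recv() call succeeds without ever having blocked the thread.
select.select([a], [], [], 1.0)
data = a.recv(1024)

print(got_data, data)
a.close()
b.close()
```

This polling/readiness-checking style is the basis of the non-blocking IO model that the multithreaded example below does not use.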

Practice:

Server:

from socket import *
from threading import Thread

def communicate(conn):
    # Echo loop for one connection: runs in its own thread, so
    # blocking in recv() here does not affect other connections.
    while True:
        try:
            data = conn.recv(1024)
            if not data:
                break
            conn.send(data.upper())
        except ConnectionResetError:
            break
    conn.close()

server = socket(AF_INET, SOCK_STREAM)
server.bind(('127.0.0.1', 8080))
server.listen(5)

while True:
    print('starting...')
    conn, addr = server.accept()
    print(addr)
    # One thread per connection
    t = Thread(target=communicate, args=(conn,))
    t.start()

server.close()

Client:

from socket import *

client = socket(AF_INET, SOCK_STREAM)
client.connect(('127.0.0.1', 8080))

while True:
    msg = input("Please enter data: ").strip()
    if not msg:
        continue
    client.send(msg.encode('utf-8'))
    data = client.recv(1024)
    print(data.decode('utf-8'))

client.close()

