Fluent Python, Chapter 17: Using Future Objects to Handle Concurrency

Source: Internet
Author: User

Since Python 3.4 there have been two classes named Future in the standard library: concurrent.futures.Future and
asyncio.Future. The two classes serve the same purpose: an instance of either Future class represents a deferred
computation that may or may not have completed.

One thing to keep in mind: normally you should not create futures yourself; they are meant to be instantiated
only by a concurrency framework (concurrent.futures or asyncio). The reason is simple: a future represents
something that will eventually happen, and the only way to be sure it will happen is that its execution has
been scheduled. Therefore, a concurrent.futures.Future instance is created only when you schedule a callable
for execution with a concurrent.futures.Executor subclass. For example, the Executor.submit() method takes a
callable as its argument, schedules it to run, and returns a future. Client code should not change the state
of a future: the concurrency framework changes it when the deferred computation the future represents
finishes, and we cannot control when that happens.
Both Future classes have a .done() method, which is non-blocking and returns a Boolean indicating whether the
callable linked to the future has executed. Instead of repeatedly asking whether a future is done, client code
usually asks to be notified. That is why both Future classes have an .add_done_callback() method: it takes a
single argument, a callable, and that callable is invoked when the future finishes.
There is also a .result() method. When called after the future is done, it works the same way in both classes:
it returns the result of the callable, or re-raises whatever exception was thrown when the callable executed.
However, when the future is not done, the behavior of the .result() method differs greatly between the two
classes. For a concurrent.futures.Future instance, calling f.result() blocks the caller's thread until a
result is ready. The method accepts an optional timeout argument: if the future is not done within the
specified time, a TimeoutError exception is raised. As you will see in Section 18.1.1,
asyncio.Future.result() does not support a timeout; in that library, the preferred way to get the result of a
future is to use the yield from construct, which does not work with concurrent.futures.Future instances.
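A small sketch of the timeout behavior in concurrent.futures; the `slow` helper is invented for illustration:

```python
import time
from concurrent import futures

def slow():
    time.sleep(2)
    return 'finished'

with futures.ThreadPoolExecutor(max_workers=1) as executor:
    fut = executor.submit(slow)
    try:
        fut.result(timeout=0.1)       # future not done yet: raises TimeoutError
    except futures.TimeoutError:
        timed_out = True
    answer = fut.result()             # no timeout: blocks until the call completes

print(timed_out, answer)
```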

Several functions in both libraries return futures; others use futures internally in ways that are transparent
to the user. The Executor.map method used in Example 17-3 belongs to the latter group: it returns an iterator
whose __next__ method calls the result method of each future, so what we get are the results of the futures,
not the futures themselves.

Blocking I/O and the GIL

The CPython interpreter itself is not thread-safe, so it has a Global Interpreter Lock (GIL) that allows only
one thread at a time to execute Python bytecode. As a result, a single Python process usually cannot use
multiple CPU cores at once.

All blocking I/O functions in the Python standard library release the GIL, allowing other threads to run. The
time.sleep() function also releases the GIL. Therefore, despite the GIL, Python threads are still useful in
I/O-intensive applications.
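This claim is easy to verify with a small timing sketch: four threads each sleep for half a second, and because time.sleep() releases the GIL they sleep concurrently, so the total elapsed time is close to 0.5 s rather than 2 s:

```python
import time
from concurrent import futures

def wait_a_bit(_):
    # time.sleep releases the GIL, so sleeping threads overlap.
    time.sleep(0.5)

start = time.perf_counter()
with futures.ThreadPoolExecutor(max_workers=4) as executor:
    list(executor.map(wait_a_bit, range(4)))
elapsed = time.perf_counter() - start
print('elapsed: {:.2f}s'.format(elapsed))  # roughly 0.5s, not 2s
```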

Using the concurrent.futures module to easily bypass the GIL for CPU-intensive jobs

The documentation for the concurrent.futures module
(https://docs.python.org/3/library/concurrent.futures.html) is subtitled "Launching parallel tasks". The
module enables truly parallel computation, because its ProcessPoolExecutor class distributes work across
multiple Python processes. So if you need to do CPU-intensive processing, this module lets you bypass the GIL
and use all available CPU cores.
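A minimal sketch of this approach; `count_primes` is an invented CPU-bound function used only for illustration:

```python
from concurrent import futures

def count_primes(limit):
    # CPU-bound work: naive prime counting, with no I/O to release the GIL.
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == '__main__':
    # Each argument runs in its own Python process, so every core can work
    # on one of them in parallel, GIL notwithstanding.
    with futures.ProcessPoolExecutor() as executor:  # workers default to os.cpu_count()
        results = list(executor.map(count_primes, [10_000] * 4))
    print(results)
```

Swapping ProcessPoolExecutor for ThreadPoolExecutor here would give no speedup, because only one thread at a time can execute the bytecode of count_primes.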

The ThreadPoolExecutor.__init__ method requires a max_workers argument specifying the number of threads in the
pool. In the ProcessPoolExecutor class, that argument is optional and rarely used: the default is the number
of CPUs returned by the os.cpu_count() function. That makes sense, because for CPU-intensive processing there
is no point in asking for more workers than there are CPUs. For I/O-intensive processing, in contrast, you can
use 10, 100, or 1,000 threads in a ThreadPoolExecutor; the best number of threads depends on what you are
doing and on how much memory is available, so careful testing is needed to find it.

from time import sleep, strftime
from concurrent import futures

def display(*args):
    print(strftime('[%H:%M:%S]'), end=' ')
    print(*args)

def loiter(n):
    msg = '{}loiter({}): doing nothing for {}s...'
    display(msg.format('\t' * n, n, n))
    sleep(n)
    msg = '{}loiter({}): done.'
    display(msg.format('\t' * n, n))
    return n * 10

def main():
    display('Script starting.')
    executor = futures.ThreadPoolExecutor(max_workers=3)
    results = executor.map(loiter, range(5))
    display('results:', results)
    display('Waiting for individual results:')
    for i, result in enumerate(results):
        display('result {}: {}'.format(i, result))

main()

The Executor.map function is easy to use, but it has a feature that may or may not be helpful, depending on
your needs: it returns the results in the same order as the calls were started. If the first call takes 10
seconds to produce a result while the others take only 1 second each, your code blocks for 10 seconds to get
the first result from the generator returned by map. After that, you get the remaining results without
blocking, because those calls have already finished. That behavior is fine when you must have all the results
before proceeding, but often it is preferable to get each result as soon as it is ready, regardless of the
order of submission. To do that, combine the Executor.submit method with the futures.as_completed function.
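A minimal sketch of the submit/as_completed combination, reusing a simplified loiter with shortened sleeps so the timing is visible:

```python
import time
from concurrent import futures

def loiter(n):
    time.sleep(n / 10)   # the larger the argument, the longer the wait
    return n

with futures.ThreadPoolExecutor(max_workers=3) as executor:
    # Submit in the order 3, 1, 2; each submit returns a Future immediately.
    to_do = [executor.submit(loiter, n) for n in (3, 1, 2)]
    # as_completed yields futures in the order they FINISH,
    # not the order they were submitted.
    done_order = [fut.result() for fut in futures.as_completed(to_do)]

print(done_order)  # most likely [1, 2, 3]: shortest sleeps finish first
```

Unlike Executor.map, no call here blocks waiting for a slow future while faster ones are already done.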

Threads and multiprocessing alternatives
Python has supported threads since version 0.9.8 (1993); concurrent.futures is just the newest way of using
them. Python 3 deprecated the original thread module in favor of the higher-level threading module
(https://docs.python.org/3/library/threading.html). If futures.ThreadPoolExecutor is not flexible enough for a
certain job, you may need to build your own solution out of components from the threading module, such as
Thread, Lock, and Semaphore, perhaps using the thread-safe queues of the queue module
(https://docs.python.org/3/library/queue.html) to pass data between threads. Those components are already
encapsulated by the futures.ThreadPoolExecutor class. The threading module has existed since Python 1.5.1
(1998), yet some people continued to use the old thread module; in Python 3 it was renamed to _thread, to
emphasize that it is a low-level implementation that should not be used in application code.
For CPU-intensive work, you sidestep the GIL by launching multiple processes. The easiest way to create
multiple processes is with the futures.ProcessPoolExecutor class. But again, if your use case is more complex,
you will need more advanced tools. The API of the multiprocessing module
(https://docs.python.org/3/library/multiprocessing.html) is similar to that of the threading module, except
that jobs are handled by multiple processes. For simple programs, you can replace the threading module with
multiprocessing with few changes. But the multiprocessing module also addresses the biggest challenge faced by
collaborating processes: passing data between them.
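A minimal sketch of passing data between processes with a multiprocessing.Queue; the `worker` function and the None sentinel convention are illustrative choices, not from the book:

```python
from multiprocessing import Process, Queue

def worker(jobs, results):
    # Receive work items from the parent process, send back results.
    while True:
        n = jobs.get()
        if n is None:          # sentinel value: no more work
            break
        results.put(n * n)

if __name__ == '__main__':
    jobs, results = Queue(), Queue()
    proc = Process(target=worker, args=(jobs, results))
    proc.start()
    for n in range(5):
        jobs.put(n)            # Queue handles the inter-process transfer
    jobs.put(None)             # tell the worker to stop
    proc.join()
    squares = sorted(results.get() for _ in range(5))
    print(squares)  # [0, 1, 4, 9, 16]
```

The same worker logic written with threading would use queue.Queue instead; that API similarity is the point made above.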

To summarize: Python has the GIL, which prevents threads from running bytecode in parallel. For I/O-intensive
jobs, however, multithreading still works well, because all blocking I/O functions in the Python standard
library release the GIL and let other threads run. For CPU-intensive jobs, you can use the concurrent.futures
module (specifically ProcessPoolExecutor) to bypass the GIL.
