Python concurrent programming in detail: multi-process, multithreading, asynchronous, and coroutines

Source: Internet
Author: User
Tags: generator, stdin, ticket, Python

While recently learning Python concurrency, I put together a summary of multi-process, multithreading, asynchronous, and coroutine programming.
I. Multithreading

Multithreading allows multiple flows of control to exist within a single process, so that several functions can be active at the same time and appear to run concurrently. Even on a single-CPU machine, the CPU can switch between the instructions of different threads, giving the effect of multiple threads running at once.

Multithreading amounts to a concurrent (concurrency) system. A concurrent system typically performs several tasks at the same time. When tasks share resources, especially when they write to the same variable, synchronization problems must be solved. Consider a multithreaded train ticketing system: one instruction checks whether tickets are sold out, and another instruction sells a ticket at one of several windows; if the two interleave badly, the system may sell a ticket that no longer exists.

In the concurrent case, the order in which instructions are executed is determined by the kernel. Within a single thread, instructions execute sequentially, but across different threads it is hard to say which instruction runs first, so multithreaded synchronization must be considered. Synchronization means that only one thread is allowed to access a given resource during a given period of time.

1. The thread module
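The low-level thread module only provides start_new_thread() and simple locks. A minimal sketch, assuming Python 2's thread.start_new_thread() and thread.allocate_lock() interfaces, might look like this:

#!/usr/bin/python
# -*- coding: utf-8 -*-
# Minimal sketch of the low-level thread module (Python 2).
import thread
import time

lock = thread.allocate_lock()  # simple mutual-exclusion lock

def worker(tid):
    lock.acquire()
    print "worker", tid, "is running"
    lock.release()

for k in range(3):
    thread.start_new_thread(worker, (k,))

time.sleep(1)  # the main thread must outlive the workers; thread has no join()

In practice, the higher-level threading module below is almost always preferred.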

2. The threading module

Use threading.Thread to create a thread.

Checking whether any tickets remain and selling a ticket must be protected by a mutex lock, so that one thread cannot be judging that tickets remain while another thread is already executing the sale of the last ticket.

#!/usr/bin/python
# -*- coding: utf-8 -*-
# __author__ = "Tyomcat"

import threading
import time
import os

def booth(tid):
    global i
    global lock
    while True:
        lock.acquire()
        if i != 0:
            i = i - 1
            print "window:", tid, ", remaining tickets:", i
            time.sleep(1)
        else:
            print "thread_id", tid, "No more tickets"
            os._exit(0)
        lock.release()
        time.sleep(1)

i = 100  # total number of tickets (the original value was lost in formatting)
lock = threading.Lock()

for k in range(10):
    new_thread = threading.Thread(target=booth, args=(k,))
    new_thread.start()

II. Coroutines (also known as micro-threads or fibers)

Unlike the preemptive scheduling of threads, a coroutine uses cooperative scheduling. A coroutine still runs in a single thread, but it lets code that would otherwise have to be written in an unfriendly asynchronous + callback style be written in a seemingly synchronous fashion.

1. Coroutines can be implemented with Python generators.

First, we need a solid understanding of generators and the yield keyword.

Calling an ordinary Python function normally starts execution at the function's first line and ends with a return statement, an exception, or the function body running to completion (which implicitly returns None).

Once the function returns control to the caller, it is completely finished. Sometimes, though, you want a function that produces a sequence and can "save its work" between values; that is a generator (a function that uses the yield keyword).

The ability to "produce a sequence" comes from the fact that the function does not return in the usual way. Implicitly, return means the function is giving control of execution back to the place where it was called. The implied meaning of yield is that the transfer of control is temporary and voluntary: our function expects to regain control in the future.

Take a look at a producer/consumer example:

#!/usr/bin/python
# -*- coding: utf-8 -*-
# __author__ = "Tyomcat"

import time
import sys

# producer
def produce(l):
    i = 0
    while 1:
        if i < 10:  # upper bound; the original value was lost in formatting
            l.append(i)
            yield i
            i = i + 1
            time.sleep(1)
        else:
            return

# consumer
def consume(l):
    p = produce(l)
    while 1:
        try:
            p.next()
            while len(l) > 0:
                print l.pop()
        except StopIteration:
            sys.exit(0)

if __name__ == "__main__":
    l = []
    consume(l)

When the program reaches the yield i inside produce, it hands back a generator and pauses execution. Each time we call p.next() in consume, the program returns to produce and continues running, so l gets another element appended; we then print l.pop() until p.next() finally raises a StopIteration exception.

2. Stackless Python
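Stackless Python is a separate, enhanced build of CPython that provides tasklets and channels; it is not part of the standard library. A rough sketch, assuming the classic stackless.tasklet / stackless.run API, might look like this:

# Rough sketch of Stackless Python tasklets (requires the Stackless build, not stock CPython).
import stackless

def greet(name):
    print "hello,", name
    stackless.schedule()  # voluntarily yield to the scheduler
    print "goodbye,", name

stackless.tasklet(greet)("foo")  # create a tasklet bound to greet("foo")
stackless.tasklet(greet)("bar")
stackless.run()  # run the cooperative scheduler until all tasklets finish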

3. Greenlet Module

In terms of performance, greenlet is second only to Stackless Python, running roughly half as fast as Stackless Python but close to an order of magnitude faster than other approaches. In fact, greenlet is not a true concurrency mechanism: within a single thread it switches between the stack frames of different function calls ("you run for a while, then I run for a while"), and you must specify explicitly when to switch and where to switch to.
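As an illustration of that explicit switching, here is a small sketch using the third-party greenlet package (assumed to be installed):

# Explicit switching between two greenlets in the same thread.
from greenlet import greenlet

def test1():
    print 12
    gr2.switch()  # hand control to gr2
    print 34

def test2():
    print 56
    gr1.switch()  # hand control back to gr1
    print 78      # never reached: when gr1 finishes, control returns to the main greenlet

gr1 = greenlet(test1)
gr2 = greenlet(test2)
gr1.switch()      # prints 12, 56, 34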

4. Eventlet Module
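eventlet builds green threads on top of greenlet and switches automatically around I/O and cooperative sleeps. A minimal sketch, assuming eventlet's GreenPool interface, might be:

# Minimal sketch of eventlet green threads (requires the third-party eventlet package).
import eventlet

def crawl(url):
    print "start", url
    eventlet.sleep(1)  # cooperative sleep: other green threads run in the meantime
    print "done", url

pool = eventlet.GreenPool()
for url in ["http://a.example", "http://b.example"]:
    pool.spawn(crawl, url)  # schedule a green thread
pool.waitall()              # wait for all green threads to finish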

III. Multi-process
1. Subprocesses (the subprocess package)

In Python, the subprocess package is used to fork a child process and run an external program.

The first module to consider when calling system commands is the os module, with os.system() and os.popen(). However, these two functions are too simple for complex operations such as providing input to a running command, reading a command's output, checking a command's running state, or managing several commands in parallel. In those cases, the Popen class in subprocess can do what we need.

>>> import subprocess
>>> import shlex
>>> command_line = raw_input()
ping -c 4 www.baidu.com
>>> args = shlex.split(command_line)
>>> p = subprocess.Popen(args)

subprocess.PIPE can join the output of one child process to the input of another, forming a pipe:

import subprocess

child1 = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
child2 = subprocess.Popen(["wc"], stdin=child1.stdout, stdout=subprocess.PIPE)
out = child2.communicate()
print(out)

The communicate() method writes any supplied data to the child's stdin, then reads the data from its stdout and stderr until the process ends.
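For example, to feed a string to a child process through stdin and capture its output (a small sketch using wc for illustration):

import subprocess

# Feed a string to wc through stdin and read its output back.
p = subprocess.Popen(["wc"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, err = p.communicate("hello world\n")  # returns (stdout_data, stderr_data)
print out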

2. Multi-process (multiprocessing package)

(1) The multiprocessing package is Python's multi-process management package. Similar to threading.Thread, it uses a multiprocessing.Process object to create a process.
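For example, creating and waiting for a single child process looks much like creating a thread (a minimal sketch with a hypothetical greet function):

from multiprocessing import Process
import os

def greet(name):
    print 'Child process %s greets %s' % (os.getpid(), name)

if __name__ == '__main__':
    p = Process(target=greet, args=('world',))
    p.start()  # start the child process
    p.join()   # wait for it to finish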

A process pool (Pool) can create multiple processes.

apply_async(func, args) takes a process from the pool and executes func with args as its arguments. It returns an AsyncResult object, on which you can call get() to obtain the result.

After close() is called, the process pool no longer creates new processes (it stops accepting new tasks).

join() waits for all processes in the pool to finish. close() must be called on the pool before join().
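For example, a minimal sketch of apply_async() and AsyncResult.get(), using a hypothetical square function:

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    pool = Pool()
    result = pool.apply_async(square, args=(7,))  # returns an AsyncResult
    print result.get()                            # blocks until the result is ready: 49
    pool.close()
    pool.join()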

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# __author__ = "Tyomcat"
# "My computer has 4 CPUs"

from multiprocessing import Pool
import os, time

def long_time_task(name):
    print 'Run task %s (%s)...' % (name, os.getpid())
    start = time.time()
    time.sleep(3)
    end = time.time()
    print 'Task %s runs %0.2f seconds.' % (name, (end - start))

if __name__ == '__main__':
    print 'Parent process %s.' % os.getpid()
    p = Pool()
    for i in range(4):
        p.apply_async(long_time_task, args=(i,))
    print 'Waiting for all subprocesses done...'
    p.close()
    p.join()
    print 'All subprocesses done.'

(2) Sharing resources across processes

This is done through shared memory or Manager objects: one process acts as a server and establishes a Manager that actually holds the resources.

Other processes access the Manager through parameters passed to them or by its address; once the connection is established, they operate on the resources held by the server.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# __author__ = 'Tyomcat'

from multiprocessing import Pool
import multiprocessing, time, random

def write(q):
    for value in ['A', 'B', 'C', 'D']:
        print 'Put %s to queue!' % value
        q.put(value)
        time.sleep(random.random())

def read(q, lock):
    while True:
        lock.acquire()
        if not q.empty():
            value = q.get(True)
            print 'Get %s from queue' % value
            time.sleep(random.random())
            lock.release()
        else:
            lock.release()  # release the lock before leaving the loop
            break

if __name__ == "__main__":
    manager = multiprocessing.Manager()
    q = manager.Queue()
    p = Pool()
    lock = manager.Lock()
    pw = p.apply_async(write, args=(q,))
    pr = p.apply_async(read, args=(q, lock))
    p.close()
    p.join()
    print
    print "All data is written and read out"

IV. Asynchronous

Whether you use threads or processes, a synchronous model means that performance drops sharply whenever something blocks: the CPU's potential is not fully exploited and the hardware investment is wasted. More importantly, the software modules become rigidly and tightly coupled, hard to split apart, and unfriendly to future extension and change.

Whether it is a process or a thread, every block or switch requires a system call so that the CPU can run the operating system scheduler, which then decides which process (or thread) runs next. In addition, multiple threads need extra locks when they access mutually exclusive sections of code.

Currently popular asynchronous servers are event-driven (for example, Nginx).

In the asynchronous event-driven model, operations that would block are converted into asynchronous operations; the main thread is responsible for initiating the asynchronous operations and handling their results. Because all blocking operations become asynchronous, the main thread can in theory spend most of its time on actual computation rather than on scheduling many threads, so this model usually performs better.
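As a very small illustration of the event-driven idea (not how Nginx itself is implemented), a select()-based loop lets a single thread wait on many sockets and act only when one of them is ready:

# Tiny event-loop sketch: one thread multiplexes many client sockets with select().
import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setblocking(0)
server.bind(('127.0.0.1', 8000))
server.listen(5)

inputs = [server]
while True:
    readable, _, _ = select.select(inputs, [], [])  # block until some socket is ready
    for s in readable:
        if s is server:
            conn, addr = s.accept()   # new client connection
            conn.setblocking(0)
            inputs.append(conn)
        else:
            data = s.recv(1024)
            if data:
                s.send(data)          # echo the data back
            else:
                inputs.remove(s)      # client closed the connection
                s.close()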

That is the entire content of this article. I hope it helps you in your learning, and I hope you will continue to support the Yunqi community.
