Processes and Threads in Python

Source: Internet
Author: User

Topics: processes and threads in Python, and the threading, multiprocessing, Queue, and subprocess modules.

When learning something, we should understand not only the what but also the why; only then do we really master it. Let's first look at what processes and threads are.

The history of processes and threads

We all know that a computer consists of hardware and software. The CPU is the core of the hardware and carries out all of the computer's work. The operating system is the software running on top of the hardware: it is the computer's manager, responsible for resource management, resource allocation, and task scheduling. A program is software that runs on the system and performs some function, such as a browser or a music player. To guarantee a program's independence while it executes, the system needs a dedicated data structure to manage and control that execution: the process control block (PCB). A process is one dynamic execution of a program over a dataset, and it generally consists of three parts: the program, the dataset, and the process control block. The program describes what the process is to accomplish and how; the dataset is the set of resources the program needs during execution; and the process control block records the external characteristics of the process and describes how its execution changes over time. The system uses the PCB to control and manage the process, so the PCB is the only sign by which the system knows the process exists.

In early operating systems the computer had only one core, the process was the smallest unit of program execution, and task scheduling used preemptive time-slice round-robin. Each process had its own separate block of memory, which guaranteed isolation between the address spaces of different processes. As computer technology developed, the process model showed significant drawbacks. First, creating, destroying, and switching processes is relatively expensive. Second, with the rise of the symmetric multiprocessor (SMP: a computer with a set of CPUs that share the memory subsystem and bus), a single machine could host many running units, but running that many parallel processes costs too much. The concept of a thread was introduced at this point. A thread, also called a lightweight process, is a basic unit of CPU execution and the smallest unit of program execution; it consists of a thread ID, a program counter, a register set, and a stack. Introducing threads reduced the overhead of concurrent execution and improved the concurrency of the operating system. A thread owns no system resources of its own, only the resources that are essential at run time, but it can share all the other resources owned by its process with the other threads belonging to the same process.

The relationship between processes and threads

Threads belong to a process: they run inside the process's address space, threads created by the same process share that memory space, and when the process exits its threads are forcibly terminated and cleaned up. A thread can share all the resources its process owns with the other threads of the same process, but it essentially owns no system resources itself, only the small amount of state that is essential while running (a program counter, a set of registers, and a stack).

The threading module

The threading module is built on top of the _thread module. The _thread module handles and controls threads in a low-level, primitive way, while the threading module wraps it a second time and provides a more convenient API for working with threads.

import threading
import time

def worker(num):
    """Thread worker function."""
    time.sleep(1)
    print("Thread %d" % num)

for i in range(5):  # thread count chosen for illustration; the original value was lost
    t = threading.Thread(target=worker, args=(i,), name="t.%d" % i)
    t.start()

Thread method descriptions

t.start(): starts the thread's activity

t.getName(): gets the name of the thread

t.setName(): sets the name of the thread

t.name: gets or sets the name of the thread (preferred over getName()/setName())

t.is_alive(): returns whether the thread is alive

t.isAlive(): deprecated alias for t.is_alive()

t.setDaemon(): sets whether the thread is a daemon (background) thread or a foreground thread; it takes a Boolean and defaults to False, and it must be called before start() is executed. If the thread is a daemon thread, it runs alongside the main thread, but once the main thread finishes, all daemon threads are stopped whether they have completed or not. If it is a foreground (non-daemon) thread, then after the main thread's own work is done the program waits for the foreground thread to finish executing before it stops.
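The before-start() requirement can be seen directly. A minimal sketch (the work() function and sleep durations are placeholders, not from the original article): setting the daemon flag is fine before start(), but raises RuntimeError once the thread has started.

```python
import threading
import time

def work():
    time.sleep(0.1)

t = threading.Thread(target=work)
t.daemon = True        # same effect as t.setDaemon(True); must happen BEFORE start()
t.start()

t2 = threading.Thread(target=work)
t2.start()
raised = False
try:
    t2.daemon = True   # too late: the thread has already started
except RuntimeError:
    raised = True
t2.join()
print(raised)          # prints True: the flag cannot be changed on a started thread
```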

t.isDaemon(): returns whether the thread is a daemon thread

t.ident: the identifier of the thread. The identifier is a nonzero integer; the property is only valid after start() has been invoked, and returns None before the thread has started.

t.join(): blocks the calling thread until this thread finishes. Note that calling join() immediately after each start() runs the threads one at a time, which defeats the purpose of multithreading.
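The usual pattern is to start all threads first and join them afterwards. A minimal sketch (worker count and sleep time are arbitrary): without the join() calls, the final print could run before the workers have appended their results.

```python
import threading
import time

results = []

def worker(n):
    time.sleep(0.1)
    results.append(n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()        # start all threads first, so they run concurrently
for t in threads:
    t.join()         # then wait for each one to finish

print(sorted(results))  # all three workers have appended by now
```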

t.run(): the method the thread executes when the CPU schedules it; it is run automatically, and is normally overridden in a Thread subclass rather than called directly.

threading.RLock and threading.Lock

When we use threads to manipulate data, multiple threads modifying the same data at the same time can produce unpredictable results. The concept of a lock is introduced to guarantee the correctness of the data.

Example: suppose all elements of list A are 0. If one thread prints the list's elements from front to back while another thread modifies the elements from back to front to 1, then some of the printed elements will be 0 and some will be 1, which is inconsistent data. A lock solves this problem.

import threading
import time

globals_num = 0

lock = threading.RLock()

def func():
    lock.acquire()  # acquire the lock
    global globals_num
    globals_num += 1
    time.sleep(1)
    print(globals_num)
    lock.release()  # release the lock

for i in range(10):
    t = threading.Thread(target=func)
    t.start()
The difference between threading.RLock and threading.Lock

RLock allows the same thread to call acquire() multiple times, while Lock does not. If you use RLock, acquire() and release() must appear in pairs: if you call acquire() n times, you must call release() n times before the lock is truly freed.

import threading
lock = threading.Lock()   # Lock object
lock.acquire()
lock.acquire()            # deadlock: the same thread cannot acquire a Lock twice
lock.release()
lock.release()
import threading
rlock = threading.RLock() # RLock object
rlock.acquire()
rlock.acquire()           # within the same thread, the program does not block
rlock.release()
rlock.release()
threading.Event

Event is one of the simplest mechanisms for communication between threads: one thread signals an event, and other threads wait for the signal. It is typically used by the main thread to control the execution of other threads. An Event manages an internal flag, which can be set to True with set() or reset to False with clear(); wait() blocks while the flag is False. The flag defaults to False.

event.wait([timeout]): blocks the thread until the event's internal flag is set to True, or until the timeout expires (if the timeout parameter is given).

event.set(): sets the flag to True.

event.clear(): resets the flag to False.

event.isSet(): returns whether the flag is True (the is_set() spelling is preferred).

import threading

def do(event):
    print('start')
    event.wait()
    print('execute')

event_obj = threading.Event()
for i in range(10):
    t = threading.Thread(target=do, args=(event_obj,))
    t.start()

event_obj.clear()
inp = input('input: ')
if inp == 'true':
    event_obj.set()

When a thread calls wait(), it blocks if the flag is False and does not block once the flag is True.
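This flag behavior can be checked directly. A minimal sketch (the timeout values are arbitrary): wait() returns False when it times out while the flag is still False, and returns True immediately once set() has been called.

```python
import threading

ev = threading.Event()                 # internal flag starts as False
assert ev.wait(timeout=0.1) is False   # flag is False: wait() blocks, then times out

ev.set()                               # flag becomes True
assert ev.is_set() is True
assert ev.wait(timeout=0.1) is True    # flag is True: wait() returns at once

ev.clear()                             # flag reset to False
assert ev.is_set() is False
```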

threading.Condition

A condition variable is always associated with some kind of lock; you can pass one in, or let a default one be created. Passing one in is useful when several condition variables must share the same lock. The lock is part of the Condition object, so there is no need to track it separately.

A condition variable obeys the context-management protocol: using the with statement acquires the associated lock for the duration of the enclosed block. The acquire() and release() methods call the corresponding methods of the associated lock.

The other methods must be called while the associated lock is held. The wait() method releases the lock and blocks until another thread wakes it up with notify() or notify_all(); once awakened, wait() re-acquires the lock and returns. The Condition class implements a condition variable, which allows one or more threads to wait until they are notified by another thread. If the lock argument is given and is not None, it must be a Lock or RLock object and is used as the underlying lock; otherwise a new RLock object is created as the underlying lock. wait(timeout=None) waits for a notification, or until the given timeout expires. If the thread calling wait() does not hold the lock, a RuntimeError is raised. After releasing the lock, wait() blocks until another thread calls notify() or notify_all() on the same condition, or until the optional timeout expires.

The notify() method wakes up one thread waiting on the condition variable, if any are waiting; notify_all() wakes up all threads waiting on the condition variable.

Note: notify() and notify_all() do not release the lock, which means that an awakened thread does not return from its wait() call immediately, but only when the thread that called notify() or notify_all() finally gives up ownership of the lock.

In the typical design style, a condition variable is used to synchronize access to some shared state: a thread that wants a particular state calls wait() repeatedly until it sees that state, while threads that modify the state call notify() or notify_all() when the change could satisfy one of the waiters. In this way, waiting threads make progress as soon as the state they want appears. Example: the producer-consumer model.

The consumer() thread waits until the producer() sets the condition before continuing.
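The original collapsed code block did not survive extraction, so here is a minimal sketch of that producer-consumer handshake (the producer/consumer names follow the text; the single shared item and the sleep are assumptions for illustration):

```python
import threading
import time

condition = threading.Condition()   # creates its own RLock underneath
items = []
consumed = []

def consumer():
    with condition:                 # acquire the condition's lock
        while not items:            # re-check the shared state after every wakeup
            condition.wait()        # releases the lock while blocked
        consumed.append(items.pop())

def producer():
    with condition:
        items.append('meal')
        condition.notify()          # wake one waiting consumer

c = threading.Thread(target=consumer)
c.start()
time.sleep(0.1)                     # let the consumer reach wait() first
p = threading.Thread(target=producer)
p.start()
p.join()
c.join()
print(consumed)                     # the consumer ran only after being notified
```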

Queue Module

Queue is a FIFO (first in, first out) queue, and it is thread-safe.

For example, suppose we go to KFC for a meal. The kitchen is where the food is cooked, the counter sells the finished meals to customers, and customers pick up their meals at the counter. The counter here is the equivalent of our queue.

This model is also called the producer-consumer model.

import queue

q = queue.Queue(maxsize=0): constructs a FIFO queue. maxsize specifies the queue length; 0 means the length is unlimited.

q.join(): blocks until every item that has been put into the queue has been gotten and processed, i.e. task_done() has been called for each one.

q.qsize(): returns the size of the queue (unreliable).

q.empty(): returns True if the queue is empty, otherwise False (unreliable).

q.full(): returns True if the queue is full, otherwise False (unreliable).

q.put(item, block=True, timeout=None): puts item at the tail of the queue. The block parameter defaults to True, meaning that when the queue is full the call waits for a free slot; when block is False the call does not block, and a queue.Full exception is raised if the queue is full. The optional timeout parameter sets how long to block; queue.Full is raised if no slot for the item becomes available in that time.

q.get(block=True, timeout=None): removes and returns the item at the head of the queue. The block parameter defaults to True, meaning that when the queue is empty the call blocks; when block is False it does not block, and a queue.Empty exception is raised if the queue is empty. The optional timeout parameter sets how long to block; queue.Empty is raised if the queue is still empty after that time.

q.put_nowait(item): equivalent to put(item, block=False).

q.get_nowait(): equivalent to get(block=False).
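Putting the methods above together, a minimal producer-consumer sketch with queue.Queue (the item count and the small maxsize are assumptions chosen so the blocking behavior actually occurs):

```python
import queue
import threading

q = queue.Queue(maxsize=2)     # small maxsize so put() can actually block

received = []

def producer():
    for i in range(5):
        q.put(i)               # blocks whenever the queue is full

def consumer():
    for _ in range(5):
        received.append(q.get())  # blocks whenever the queue is empty
        q.task_done()

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
q.join()                       # blocks until task_done() has matched every put()
t1.join(); t2.join()
print(received)                # FIFO order is preserved
```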
