Daemon processes, mutexes, IPC mechanisms, and the producer-consumer model


I. Supplement: p.join() and p.pid
from multiprocessing import Process
import time, os

def task():
    print('%s is running' % os.getpid())
    time.sleep(3)

if __name__ == '__main__':
    p = Process(target=task)
    p.start()
    p.join()  # wait for process p to end; join makes a wait system call so the OS can reclaim p's PID

    print(p.pid)  # ??? can we still see the PID of child process p at this point?
    print('master')

Answer: yes.
Analysis:
p.join() effectively tells the operating system that p's PID no longer needs to be kept and can be reclaimed.
The parent process can still read p.pid afterwards, but at that point p.pid is only a stale number, because the operating system has already reclaimed it.
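
A minimal sketch of the same point (the extra is_alive/exitcode checks are added here only for illustration): after join() the Process object still remembers the old PID, but the child itself has already been reaped by the OS.

from multiprocessing import Process
import time, os

def task():
    print('%s is running' % os.getpid())
    time.sleep(1)

if __name__ == '__main__':
    p = Process(target=task)
    p.start()
    p.join()
    print(p.pid)         # the old PID is still stored on the Process object
    print(p.is_alive())  # False: the child process no longer exists
    print(p.exitcode)    # 0: the child exited normally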


II. The daemon process

1. What is a daemon process?
A daemon process is simply a child process, and it is terminated as soon as the main process's code has finished running.

2. Why use a daemon process?
Two key questions:
When do I need to start a child process?
When the parent process needs to run a task concurrently, that task should be placed in a child process.

When should a child process be made a daemon?
When the child process's task becomes pointless once the parent process's code has finished running, the child
process should be set as a daemon, so that it dies as soon as the parent's code ends.


from multiprocessing import Process
import time, os

def task(name):
    print('%s is running' % name)
    time.sleep(3)

if __name__ == '__main__':
    p1 = Process(target=task, args=('daemon process',))
    p2 = Process(target=task, args=('normal child process',))

    p1.daemon = True  # must be set before p1.start()
    p1.start()
    p2.start()

    print('master')


# Once the main process's code has finished running, the daemon process is terminated.
from multiprocessing import Process
import time

def foo():
    print(123)
    time.sleep(1)
    print("end123")

def bar():
    print(456)
    time.sleep(3)
    print("end456")

if __name__ == '__main__':
    p1 = Process(target=foo)
    p2 = Process(target=bar)

    p1.daemon = True
    p1.start()
    p2.start()
    print("main-------")


On an average computer:
'''
main-------
456
end456
'''
On a slightly faster computer:
'''
main-------
123
456
end456
'''
On a very fast computer:
'''
123
main-------
456
end456
'''

In every case you will never see end123: the daemon p1 is terminated as soon as the main process's code finishes, before foo can wake from its one-second sleep.



III. The mutex (mutual exclusion lock)
A mutex turns the part of a task's code that modifies shared data into a serial section:
it sacrifices efficiency, but it keeps the data safe.

Mutex vs p.join()
1. A mutex serializes only the critical section (local serial); see the counter sketch after the ticket example below.
2. p.join() serializes the entire code of the task (overall serial).

# The content of the file db.txt is: {"count": 1}
# Note: you must use double quotes, otherwise json cannot parse the file

from multiprocessing import Process, Lock
import json
import os
import time
import random

def check():
    time.sleep(1)  # simulate network latency
    with open('db.txt', 'rt', encoding='utf-8') as f:
        dic = json.load(f)
    print('%s checked: remaining tickets [%s]' % (os.getpid(), dic['count']))

def get():
    with open('db.txt', 'rt', encoding='utf-8') as f:
        dic = json.load(f)
    time.sleep(2)
    if dic['count'] > 0:
        # there are still tickets
        dic['count'] -= 1
        time.sleep(random.randint(1, 3))
        with open('db.txt', 'wt', encoding='utf-8') as f:
            json.dump(dic, f)
        print('%s purchased a ticket successfully' % os.getpid())
    else:
        print('%s: no tickets left' % os.getpid())


def task(mutex):
    # check the remaining tickets
    check()

    # buy a ticket
    mutex.acquire()  # a mutex cannot be acquired twice in a row; it must be released before it can be acquired again
    get()
    mutex.release()

    # with mutex:
    #     get()

if __name__ == '__main__':
    mutex = Lock()
    for i in range(10):
        p = Process(target=task, args=(mutex,))
        p.start()
        # p.join()
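
A smaller sketch of the same "local serial" idea, using a shared counter instead of a file (Value and the worker function here are illustrative additions, not part of the ticket example): the sleep still runs concurrently in all ten processes, and only the update of the shared value is serialized by the lock.

from multiprocessing import Process, Lock, Value
import time

def worker(mutex, counter):
    time.sleep(0.1)          # this part still runs concurrently
    with mutex:              # only the shared-data update is serialized
        counter.value += 1

if __name__ == '__main__':
    mutex = Lock()
    counter = Value('i', 0)  # an integer in shared memory
    ps = [Process(target=worker, args=(mutex, counter)) for _ in range(10)]
    for p in ps:
        p.start()
    for p in ps:
        p.join()
    print(counter.value)     # 10: no update is lost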


IV. The IPC mechanism (*****)
IPC: inter-process communication. Two mechanisms are available:
1. Pipe (a minimal sketch follows below)
2. Queue: essentially a pipe plus a lock
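
The examples below all use Queue; for comparison, a minimal Pipe sketch (the child function and connection names here are only illustrative):

from multiprocessing import Process, Pipe

def child(conn):
    conn.send('hello from the child process')
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()  # two connected endpoints
    p = Process(target=child, args=(child_conn,))
    p.start()
    print(parent_conn.recv())         # 'hello from the child process'
    p.join()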

q = Queue()  # first in, first out
Note:
1. The queue lives in memory.
2. Do not put large pieces of data in the queue; it should only carry small messages.

Must master:
q.put('first')
q.put({'k': 'second'})
q.put(['third'])
# q.put(4)

print(q.get())
print(q.get())
print(q.get())
print(q.get())  # the queue is empty, so this call blocks forever

For understanding:
q = Queue(3)  # first in, first out
q.put('first', block=True, timeout=3)
q.put({'k': 'second'}, block=True, timeout=3)
q.put(['third'], block=True, timeout=3)
print('===>')
# q.put(4, block=True, timeout=3)  # the queue is full: after waiting 3 seconds this raises an exception

print(q.get(block=True, timeout=3))
print(q.get(block=True, timeout=3))
print(q.get(block=True, timeout=3))
print(q.get(block=True, timeout=3))  # the queue is empty: after waiting 3 seconds this raises an exception

timeout only takes effect when block=True.



q = Queue(3)  # first in, first out
q.put('first', block=False)
q.put({'k': 'second'}, block=False)
q.put(['third'], block=False)
print('===>')
# q.put(4, block=False)  # with block=False a full queue raises an exception immediately instead of blocking

A common pattern:
for i in range(10):
    q.put(i, block=False)  # raises an exception as soon as the queue is full

print(q.get(block=False))
print(q.get(block=False))
print(q.get(block=False))
print('get over')
# print(q.get(block=False))



q = Queue(3)  # first in, first out

q.put_nowait('first')  # equivalent to q.put('first', block=False)
q.put_nowait(2)
q.put_nowait(3)
# q.put_nowait(4)

print(q.get_nowait())  # equivalent to q.get(block=False)
print(q.get_nowait())
print(q.get_nowait())
print(q.get_nowait())  # the queue is empty now, so this raises an exception
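
A minimal sketch of catching those exceptions (added here only as an illustration): when block=False (or with the *_nowait methods), multiprocessing.Queue raises the Full and Empty exception classes from the standard queue module.

import queue
import time
from multiprocessing import Queue

q = Queue(3)
for i in range(5):
    try:
        q.put_nowait(i)
    except queue.Full:
        print('the queue is full, dropped %s' % i)

time.sleep(0.1)  # give the feeder thread a moment to flush the puts
while True:
    try:
        print(q.get_nowait())
    except queue.Empty:
        print('the queue is empty')
        break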



V. The producer-consumer model (******)

1. What is the producer-consumer model?
Producer: a metaphor for the tasks in a program that generate data.
Consumer: a metaphor for the tasks in a program that process data.

producer --> shared medium (a queue) --> consumer

2. Why use it?
It decouples producers from consumers: producers can keep producing and consumers can keep consuming,
which balances the producers' production capacity against the consumers' processing capacity and improves
the overall efficiency of the program.

3. When to use it?
When a program has two clearly distinct kinds of tasks, one responsible for generating data and the other
responsible for processing it, you should consider the producer-consumer model to improve the program's efficiency.


Simple implementation:
from multiprocessing import Queue, Process
import time, os, random

def producer(q):
    for i in range(10):
        res = 'bun %s' % i
        time.sleep(random.randint(1, 3))
        # put it into the queue
        q.put(res)
        print('\033[45m%s produced %s\033[0m' % (os.getpid(), res))


def consumer(q):
    while True:
        # take it out of the queue
        res = q.get()
        if res is None: break
        time.sleep(random.randint(1, 3))
        print('\033[43m%s ate %s\033[0m' % (os.getpid(), res))

if __name__ == '__main__':
    q = Queue()
    # producer
    p1 = Process(target=producer, args=(q,))
    # consumer
    c1 = Process(target=consumer, args=(q,))

    p1.start()
    c1.start()

    print('master')

Problem: the program never ends, because p1 finishes but c1 never does (it blocks on q.get() forever).


Fully implemented:
from multiprocessing import Queue, Process
import time, random

def producer(name, food, q):
    for i in range(3):
        res = '%s %s' % (food, i)
        time.sleep(random.randint(1, 3))
        # put it into the queue
        q.put(res)
        print('\033[45m%s produced %s\033[0m' % (name, res))
    # q.put(None)  # do not put None here: a consumer might take it and stop while other producers still have data coming

def consumer(name, q):
    while True:
        # take it out of the queue
        res = q.get()
        if res is None: break
        time.sleep(random.randint(1, 3))
        print('\033[41m%s ate %s\033[0m' % (name, res))

if __name__ == '__main__':
    q = Queue()
    # producers
    p1 = Process(target=producer, args=('Egon', 'bun', q))
    p2 = Process(target=producer, args=('wxx', 'dumplings', q))
    p3 = Process(target=producer, args=('lxx', 'wontons', q))
    # consumers
    c1 = Process(target=consumer, args=('Alex', q))
    c2 = Process(target=consumer, args=('per', q))

    p1.start()
    p2.start()
    p3.start()
    c1.start()
    c2.start()

    p1.join()
    p2.join()
    p3.join()
    # only after p1, p2 and p3 have all finished should the end signals be put in the queue: one None per consumer
    q.put(None)
    q.put(None)

Ultimate edition:

from multiprocessing import JoinableQueue, Process
import time
import os
import random

def producer(name, food, q):
    for i in range(3):
        res = '%s %s' % (food, i)
        time.sleep(random.randint(1, 3))
        # put it into the queue
        q.put(res)
        print('\033[45m%s produced %s\033[0m' % (name, res))
    # q.put(None)

def consumer(name, q):
    while True:
        # take it out of the queue
        res = q.get()
        if res is None: break
        time.sleep(random.randint(1, 3))
        print('\033[46m%s ate %s\033[0m' % (name, res))
        q.task_done()  # send a signal to confirm that one item has been taken and processed

if __name__ == '__main__':
    q = JoinableQueue()
    # producers
    p1 = Process(target=producer, args=('Egon', 'bun', q,))
    p2 = Process(target=producer, args=('Pigsy', 'swill', q,))
    p3 = Process(target=producer, args=('Monkey', 'xiang', q,))
    # consumers
    c1 = Process(target=consumer, args=('Zhou', q,))
    c2 = Process(target=consumer, args=('Wu', q,))
    c1.daemon = True
    c2.daemon = True  # daemon must be set before start()

    p1.start()
    p2.start()
    p3.start()
    c1.start()
    c2.start()

    p1.join()
    p2.join()
    p3.join()

    q.join()  # wait until the queue has been emptied
    # once q.join() returns, the main process's code is done ---> the producers are done and the queue has been
    # drained, so the consumers have nothing left to do; since they are daemons, they die with the main process.

