Getting Started with Python (23): Processes


This article describes the functions in Python's os module for querying and modifying process information. They map directly onto Linux system concepts, so they are also a good way to understand how Linux itself works.

1. Process information

The related functions in the os module are as follows:

os.uname() returns information about the operating system, similar to the uname command on Linux.

os.umask() sets the permission mask used when the process creates a file, similar to the umask command on Linux.

os.get*() queries (* replaced by one of the following):

uid, euid, resuid, gid, egid, resgid: permission related; getresuid() is mainly used to query the saved UID. See "Linux users and the least-privilege principle" for background.

pid, pgid, ppid, sid: process related. See "Linux process relationships" for an introduction.

os.set*() sets (* replaced by one of the following):

euid, egid: change the process's effective UID and effective GID.

uid, gid: change the process's real UID and GID. Only the superuser may change them (i.e. run the script with $ sudo python).

pgid, sid: change the process group and session the process belongs to.

os.environ: a mapping holding the process's environment variables; os.getenv() reads a single variable and os.putenv() changes one.
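As a quick illustration of how umask() interacts with file creation, here is a small sketch. The 0o022 mask value and the temporary file name are just examples; the kernel applies mode & ~umask when the file is created:

```python
import os
import tempfile

# Set a mask of 0o022 (clear group/other write bits); umask() returns the old mask
old_mask = os.umask(0o022)

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "demo.txt")

# Ask for mode 0o666; the kernel stores mode & ~umask at creation time
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
os.close(fd)

mode = os.stat(path).st_mode & 0o777
print(oct(mode))        # 0o644 = 0o666 & ~0o022

os.umask(old_mask)      # restore the original mask
```

Running the same snippet after os.umask(0o077) would instead leave the file with mode 0o600.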

Example 1: the real UID and real GID of the process (the result of each call is shown after the # comment):

#!/usr/bin/env python
# @author: Homer

import os

print(os.getuid())                 # 1000
print(os.getpid())                 # 9047
print(os.getppid())                # 6829
print(os.getgid())                 # 1000
print(os.getgroups())              # e.g. [4, 124, 1000]
print(os.getenv("JAVA_HOME"))      # /home/homer/eclipse/jdk1.6.0_22

Save the program to a file, then run it with both $ python and $ sudo python and compare the results.

2. About saved UID and saved GID

As described in "Linux users and the least-privilege principle", the saved UID and saved GID are difficult to use from a Python program. The reason is that when we run a Python script, what actually executes is the Python interpreter, not the script file itself (whereas a C program is compiled into an executable that runs directly). To use the saved-UID mechanism we would have to change the permissions of the Python interpreter executable itself, and that is extremely risky.

For example, suppose our Python executable is /usr/bin/python (you can find it with $ which python).

Let's look at its current permissions first:

$ ls -l /usr/bin/python

The result:

-rwxr-xr-x root root

We modify the permissions to turn on the set-UID and set-GID bits (see "Linux users and the least-privilege principle"):

$ sudo chmod 6755 /usr/bin/python

/usr/bin/python's permissions become:

-rwsr-sr-x root root

Then we run the following script; the file can be owned by an ordinary user (for example, user vamei):

import os
print(os.getresuid())

We get the result:

(1000, 0, 0)

The three values are the real UID, effective UID, and saved UID respectively. Simply by executing a Python script owned by an ordinary user, we obtained superuser privileges. This is extremely dangerous: we have handed over the system's protection mechanism, and Python's powerful features become a weapon others can use to attack the system. Use the following command to revert to the previous permissions:

$ sudo chmod 0755 /usr/bin/python
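The safe way to work with set-UID programs is to drop elevated privileges as early as possible. Here is a minimal sketch of that idea; the function name and the UID/GID value 1000 are my own placeholders, and the calls only take effect when the script starts as root:

```python
import os

def drop_privileges(uid, gid):
    """Permanently give up root privileges. Returns True if privileges were dropped."""
    if os.getuid() != 0:
        return False          # not root: nothing to drop
    os.setgid(gid)            # drop the group first, while we still have the right to
    os.setuid(uid)            # then the user; for root this also replaces the saved UID
    return True

dropped = drop_privileges(1000, 1000)
print("dropped privileges:", dropped)
```

Because os.setuid() called by root replaces the real, effective, and saved UID all at once, the process cannot regain root afterwards.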


Summary:

get*(), set*()

umask(), uname()

Python Multi-process

1. Threading and multiprocessing

The multiprocessing package is Python's multi-process management package. Similar to threading.Thread, a multiprocessing.Process object is used to create a process, which runs a function defined in the Python program. The Process object is used the same way as a Thread object, with the same start(), run(), and join() methods. The multiprocessing package also has Lock/Event/Semaphore/Condition classes to synchronize processes, used the same way as their counterparts in the threading package (and, as in multithreading, these objects can be passed to each process through arguments). So a large part of multiprocessing uses the same API as threading, only in a multi-process context.

But when using these shared APIs, keep the following points in mind.

On Unix platforms, a terminated process needs to be waited on by its parent, or it becomes a zombie process. It is therefore necessary to call join() on every Process object (which is essentially equivalent to wait()). Multithreading has no such requirement, because there is only one process.

multiprocessing provides IPC mechanisms that the threading package does not have, such as Pipe and Queue, and they are more efficient. Prefer Pipe and Queue over synchronization primitives such as Lock/Event/Semaphore/Condition (because the latter occupy resources that belong to the operating system rather than to the user process).

Multiple processes should avoid sharing resources. With threads we can easily share resources, for example through global variables or arguments; with processes this does not work, because each process has its own separate memory space. We can still share resources through shared memory or a Manager, but that increases the program's complexity and reduces its efficiency because of the synchronization required.

The Process object stores the child's PID in its pid attribute; before start() is called, pid is None.

The following program shows that Thread and Process objects are used in a similar way but produce different results. Each thread and each process does one thing: print its PID. The problem is that all tasks write to the same standard output (stdout), so without coordination the output characters would get mixed together and become unreadable. Synchronizing with a Lock prevents multiple tasks from writing to the terminal at the same time: one task finishes its output before another is allowed to print.

#!/usr/bin/env python
# @author: Homer

import os
import threading
import multiprocessing

# worker function
def worker(sign, lock):
    lock.acquire()
    print(sign, os.getpid())
    lock.release()

# main
print('main:', os.getpid())      # main process

# multi-thread
record = []
lock = threading.Lock()
for i in range(5):
    thread = threading.Thread(target=worker, args=('thread', lock))
    thread.start()
    record.append(thread)

for thread in record:
    print(thread)
    thread.join()

# multi-process
record = []
lock = multiprocessing.Lock()
for i in range(5):
    process = multiprocessing.Process(target=worker, args=('process', lock))
    process.start()
    record.append(process)

for process in record:
    print(process)
    process.join()

Run Result:

('main:', 9904)
('thread', 9904)
('thread', 9904)
('thread', 9904)
<Thread(Thread-1, stopped 140098907965184)>
<Thread(Thread-2, stopped 140098907965184)>
<Thread(Thread-3, stopped 140098899572480)>
<Thread(Thread-4, started 140098907965184)>
('thread', 9904)
<Thread(Thread-5, started 140098899572480)>
('thread', 9904)
('process', 9914)
('process', 9915)
('process', 9916)
<Process(Process-1, stopped)>
<Process(Process-2, stopped)>
<Process(Process-3, started)>
<Process(Process-4, started)>
('process', 9917)
<Process(Process-5, started)>
('process', 9918)

All the threads print the same PID as the main program, while each process has its own, different PID. (The exact interleaving of the printed Thread/Process objects varies from run to run, since they are printed outside the lock.)

With the multiprocessing package, a multithreaded Python program that uses synchronization can be turned into a multi-process program with only small changes.

2. Pipe and queue

Just as Linux multi-process programming has pipes and message queues, the multiprocessing package provides Pipe and Queue classes to support these two IPC mechanisms. Both Pipe and Queue can be used to transfer ordinary (picklable) objects.

1) A pipe can be one-way (half-duplex) or two-way (duplex). We create a one-way pipe with multiprocessing.Pipe(duplex=False) (the default is bidirectional). One process sends an object into one end of the pipe, and the process at the other end receives it. A one-way pipe only allows input at one end, while a two-way pipe allows input at both ends.

The following program shows the use of pipe:

#!/usr/bin/env python
# @author: Homer

import multiprocessing as multipro

def proc1(pipe):
    pipe.send('hello')
    print('proc1 recv:', pipe.recv())

def proc2(pipe):
    print('proc2 recv:', pipe.recv())
    pipe.send('hello, too')

# build a pipe
pipe = multipro.Pipe()

# pass one end of the pipe to each process
p1 = multipro.Process(target=proc1, args=(pipe[0],))
p2 = multipro.Process(target=proc2, args=(pipe[1],))
p1.start()
p2.start()
p1.join()
p2.join()

Run Result:

('proc2 recv:', 'hello')
('proc1 recv:', 'hello, too')

The pipe here is two-way.

When a Pipe object is created, it returns a tuple of two elements, each representing one end of the pipe (a Connection object). We call send() on one end of the pipe to transfer an object, and recv() on the other end to receive it.

2) A Queue is similar to a pipe: both are first-in-first-out structures. But a Queue allows multiple processes to put objects in, and multiple processes to take objects out. A queue is created with multiprocessing.Queue(maxsize), where maxsize is the maximum number of objects the queue can hold at once.

The following program shows the use of the queue:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @author: Homer

import os
import multiprocessing
import time

# input worker
def inputQ(queue):
    info = str(os.getpid()) + '(put): ' + str(time.strftime("%Y-%m-%d__%H:%M:%S", time.localtime(time.time())))
    queue.put(info)

# output worker
def outputQ(queue, lock):
    info = queue.get()
    lock.acquire()
    print(str(os.getpid()) + '(get): ' + info + '\n')
    lock.release()

# main
record1 = []   # store input processes
record2 = []   # store output processes
lock = multiprocessing.Lock()    # to prevent messy print
queue = multiprocessing.Queue(3)

# input processes
for i in range(10):
    process = multiprocessing.Process(target=inputQ, args=(queue,))
    process.start()
    record1.append(process)

# output processes
for i in range(10):
    process = multiprocessing.Process(target=outputQ, args=(queue, lock))
    process.start()
    record2.append(process)

for p in record1:
    p.join()

queue.close()   # no more objects will come, close the queue

for p in record2:
    p.join()

Run Result:

10370(get): 10357(put): 2013-12-11__19:32:09
10369(get): 10356(put): 2013-12-11__19:32:09
10372(get): 10359(put): 2013-12-11__19:32:09
10371(get): 10360(put): 2013-12-11__19:32:09
10378(get): 10366(put): 2013-12-11__19:32:09
10374(get): 10365(put): 2013-12-11__19:32:09
10376(get): 10364(put): 2013-12-11__19:32:09
10380(get): 10361(put): 2013-12-11__19:32:09
10381(get): 10368(put): 2013-12-11__19:32:09
10383(get): 10367(put): 2013-12-11__19:32:09

Some processes use put() to place a string in the queue; the string contains that process's PID and the current time. Other processes take strings out of the queue with get() and print their own PID together with the string they received.


Summary:

Process, Lock, Event, Semaphore, Condition

Pipe, Queue
