Linux System Knowledge - Processes & Threads


Author: Vamei. Source: http://www.cnblogs.com/vamei. Reprints are welcome; please keep this attribution. Thank you!

Reference links

http://www.cnblogs.com/vamei/archive/2012/09/20/2694466.html

http://www.cnblogs.com/vamei/archive/2012/10/09/2715393.html

Background knowledge

Instruction: what a computer can actually do is very simple, such as adding two numbers or finding an address in memory. These most basic computer actions are called instructions.

Program: a series of instructions grouped together. Through a program, we can make the computer perform complex actions. Most of the time a program is stored as an executable file.

Process: a process is a concrete instance of a program; it is a program in execution.

Process creation

When the computer boots, the kernel creates only one process, init. All remaining processes are created by the init process through the fork mechanism (a new process is copied from an old process).

Processes live in memory, and each process has its own space in memory.

Fork

Fork is a system call.

When a process forks, Linux allocates a new memory space for the new process and copies the contents of the old process's space into it; the two processes then run concurrently.

fork is usually called as a function. The function returns twice: it returns the child's PID to the parent process, and 0 to the child process.

After the fork call, a program usually branches with an if statement. When the returned PID equals 0, the process is the child, and it is made to execute certain instructions, for example using an exec library function to read another program file and execute it in the current process space (this is in fact one of the main reasons to use fork: to create a process for a program). When the PID is a positive integer, the process is the parent, and it executes some other instructions. In this way, the child process can do different work from its parent once it has been created.
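The branch described above can be sketched in Python, whose os module wraps the same system calls (Unix only; this is a minimal illustrative example, not the article's own code):

```python
import os

pid = os.fork()  # returns twice: the child's PID in the parent, 0 in the child
if pid == 0:
    # Child branch: a real program often calls an exec function here
    # (e.g. os.execvp) to run another program in this process space.
    os._exit(0)
else:
    # Parent branch: wait for the child and collect its exit status.
    child, status = os.waitpid(pid, 0)
```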

Process running

While a process runs, its memory is laid out as follows (addresses from high to low):

Stack

The stack is made up of frames (stack frames). A frame stores the parameters and local variables of the currently active function, together with the function's return address.

When the program calls a function, the stack grows downward by one frame.

When the active function returns, its frame is popped off the stack (popped: read and removed from the stack), and control is handed to the instruction at the return address recorded in the frame.

Unused area

Heap

Grows upward (described in the Heap section below).

Global data

From low to high addresses: constants, initialized global/static variables, uninitialized global/static variables.

Text (instruction code)

Stores the instructions (the code segment).

(Diagram of the process memory layout omitted.)

Notes:

1. The frames at the bottom of the stack, together with the global data, form the current environment. The currently active function can fetch the variables it needs from this environment.

2. The text and global data segments are determined when the process starts and stay fixed for the whole life of the process.

Process additional Information

Besides the contents of its own memory space, each process also has additional information, including its PID, PPID, PGID and so on, which describes the process's identity, its relationships to other processes, and other statistics.

This information is stored in the kernel's memory space: for each process the kernel keeps a variable (a task_struct structure) there to hold it.

By looking at this additional information in its own space, the kernel can know the state of every process without going into the process's own memory.

Each process's additional information includes a location dedicated to saving received signals.

Stack

As a process runs, control is transferred back and forth between functions through function calls and returns.

When a process calls a function, the original function's frame preserves its state at the moment control leaves it, and the required frame space is allocated for the new function.

When the called function returns, its frame is popped and the space it occupied is freed. The process restores the state held in the original function's frame and continues execution from the instruction the return address points to.

This process repeats, with the stack growing and shrinking, until main() returns, the stack is completely emptied, and the process ends.
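The call/return mechanism can be observed from inside a running program. This sketch uses Python's inspect module to list the frames currently on the stack (the function names are illustrative):

```python
import inspect

def inner():
    # Each active call keeps a frame on the stack; the newest frame is first.
    return [f.function for f in inspect.stack()]

def outer():
    return inner()

frames = outer()
```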

Heap

When the program calls malloc, the heap grows upward; the growth is the space that malloc requests from the operating system.

Space requested with malloc persists until it is released with free, or until the process ends.

Memory leak: heap space that is no longer needed is never freed, causing the heap to keep growing and available memory to shrink.
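Python has no explicit malloc/free, but tracemalloc can show the same pattern: an allocation persists (and the heap stays grown) until it is released. A small sketch under that analogy:

```python
import tracemalloc

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()

buf = bytearray(1_000_000)   # analogous to malloc(1000000): heap usage grows
after, _ = tracemalloc.get_traced_memory()
growth = after - before

del buf                      # analogous to free(): the space can be reused
tracemalloc.stop()
```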

Stack overflow

The stack and heap grow and shrink as the process runs. When the top of the stack meets the heap, the process has no more memory available: the stack overflows and the process aborts.

If memory is cleaned up promptly and the stack still overflows, you need to add physical memory.

Process Group

Each process belongs to a process group, and each process group can contain multiple processes.

Each process group has a leader process, and the leader's PID serves as the ID of the process group, i.e. the PGID.

The leader process may terminate first; the process group still exists and keeps the same PGID until the last process in the group terminates.

An important use of process groups is that a signal can be sent to a whole process group: every process in the group receives the signal.
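These relationships can be checked with Python's os module (Unix only). The sketch below forks a child and verifies that it inherits the parent's process group, which is what lets a group-wide signal (e.g. via os.killpg) reach both:

```python
import os

parent_pgid = os.getpgid(0)   # PGID of the calling process

pid = os.fork()
if pid == 0:
    # The child is created in the same process group as the parent.
    os._exit(0 if os.getpgid(0) == parent_pgid else 1)

_, status = os.waitpid(pid, 0)
same_group = (os.WEXITSTATUS(status) == 0)
```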

Sessions

Multiple process groups form a session.

A session is established by a process called the session leader, and the leader's PID serves as the session ID (SID).

Each process group in a session is called a job.

A session can have one process group as its foreground job; the other process groups are background jobs.

Each session can be connected to a controlling terminal.

Input and output at the controlling terminal, as well as signals generated from the terminal (Ctrl+Z / Ctrl+C), are delivered to the session's foreground job.

Appending & to the end of a command makes it run in the background.

A job can be brought out of the background with fg, making it the foreground job.

Bash supports job control; sh does not.

Foreground jobs

Have exclusive use of stdin (and of the command-line window: other commands can be run only after the job finishes or is terminated manually).

Can write to stdout and stderr.

Background jobs

Do not inherit stdin (they cannot read input; if they need to read input, they are paused and halted).

Inherit stdout and stderr (a background job's output is shown on the command line as it is produced).

SIGHUP signal

When the user exits a session, the system sends a SIGHUP signal to the session.

The session forwards the SIGHUP signal to all of its child processes.

A child process exits automatically after receiving SIGHUP.

Foreground jobs always receive the SIGHUP signal.

Whether background jobs receive SIGHUP is determined by the huponexit option (check it with shopt | grep huponexit); this option decides whether SIGHUP is sent to background jobs when the session exits.

Disown

disown removes a job from the background job list. Even if huponexit is on, the system will not send SIGHUP at session exit to jobs that are no longer on the background job list.

Standard I/O

Background jobs inherit their standard I/O from the session. If the session ends, a disowned background job that tries to use that standard I/O will hit an error and terminate.

Therefore, the standard I/O of such a job needs to be redirected.

Nohup

No hang up: not suspended.

A process run under nohup no longer receives the SIGHUP signal.

nohup closes stdin: the process cannot read input, even in the foreground.

nohup redirects stdout and stderr to the file nohup.out.

Screen and Tmux

Purpose: terminal multiplexers, which can manage multiple sessions from the same terminal.

Not covered in depth here.

Multithreading principle

A program whose process has only one flow of control while running is called single-threaded.

A program whose process has multiple flows of control while running is called multithreaded.

Even a single-CPU computer can switch rapidly between threads, producing the effect of multithreading.

In a stack, only the top frame can be read and written; it corresponds to the function that is currently running (the active function).

Multithreaded programs work around this limitation by creating multiple stacks in a single process memory space (the stacks are separated by some blank space, leaving each stack room to grow).

The multiple stacks share the text, heap, and global data regions of the process memory.

Because the same process space holds several stacks, a stack overflow occurs whenever any of the blank gaps is filled.

Multithreading concurrency

Multithreading amounts to a concurrent system, and a concurrent system typically carries out multiple tasks at the same time.

If multiple tasks operate on a shared resource at the same time, concurrency problems can arise.

The way to resolve a concurrency problem is to turn the original sequence of instructions into an indivisible atomic operation.

Multi-threaded synchronization

Synchronization means that at any given time only one thread is allowed to access a resource.

Resource synchronization can be implemented with mutexes, condition variables, and read/write locks.

Mutex (mutual exclusion lock): a piece of code is locked so that only one thread at a time may execute it; any other thread must wait for that thread to release the mutex before it can access the code.

Condition variable: typically combined with a mutex; it avoids the waste of resources that occurs when every thread must repeatedly acquire the mutex just to check whether a condition has occurred.
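Python's threading module provides both primitives. This sketch shows a condition variable bound to a mutex, so the consumer sleeps instead of repeatedly grabbing the lock to poll:

```python
import threading

lock = threading.Lock()            # the mutex
cond = threading.Condition(lock)   # condition variable bound to the mutex
items = []
consumed = []

def consumer():
    with cond:                     # acquire the mutex
        while not items:           # re-check the condition after each wakeup
            cond.wait()            # releases the mutex while sleeping
        consumed.append(items.pop())

def producer():
    with cond:
        items.append('job')
        cond.notify()              # wake one waiting consumer

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
```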

Read/write Lock:

An unlocked RW lock can be acquired by a thread as either an R (read) lock or a W (write) lock.

If a thread holds the R lock, other threads can still acquire the R lock without waiting for it to be released. But if another thread wants the W lock at that point, it must wait until every thread holding the shared read lock has released its R lock.

If a thread holds the W lock, then all other threads, whether they want the R lock or the W lock, must wait for that thread to release the W lock.
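Python's standard library has no read/write lock, but the three rules above can be built on top of a condition variable. A minimal sketch (the class and its names are my own, not a standard API):

```python
import threading

class RWLock:
    """Many readers may hold the lock at once, or exactly one writer."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_read(self):
        with self._cond:
            while self._writer:                   # wait out any writer
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()           # last reader lets a writer in

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers:  # wait until all R locks are released
                self._cond.wait()
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

lock = RWLock()
lock.acquire_read()
lock.acquire_read()        # a second reader enters without waiting
readers_at_peak = lock._readers
lock.release_read()
lock.release_read()
```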

Inter-process communication

IPC (interprocess communication)

Methods: text files, signals, pipes, traditional IPC.

Process Communication - Text files

One process writes information to a text file and another process reads it.

Very inefficient, because the file is on disk.

Process Communication-Signal

Can pass only a very small amount of information, in the form of an integer.

Process Communication - Pipes

A communication channel can be established between two processes; pipes are divided into anonymous pipes (pipe) and named pipes (FIFO).
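An anonymous pipe combined with fork is the classic pattern (Unix only). A hedged sketch using Python's os wrappers for the same syscalls:

```python
import os

r, w = os.pipe()          # r: read end, w: write end
pid = os.fork()
if pid == 0:
    os.close(r)           # the child only writes
    os.write(w, b'hello from child')
    os._exit(0)

os.close(w)               # the parent only reads
msg = os.read(r, 1024)
os.waitpid(pid, 0)
os.close(r)
```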

Process Communication-Traditional IPC

Mainly message queues, semaphores, and shared memory.

These IPC mechanisms allow multiple processes to share resources. However, because multiple processes run concurrently, synchronization problems must also be addressed.

Traditional IPC - Message queues

Like a pipe, it is a first-in, first-out queue.

Differences:

A message queue allows multiple processes to put in and take out messages.

Each message can carry an integer identifier (message_type), and messages can be classified by this identifier.

When a process takes a message out of the queue, it can take messages in FIFO order, or take only messages of a certain type (also in FIFO order).

Message queues do not use the file API (i.e. calls that take a file plus parameters).

A message queue does not disappear automatically; it remains in the kernel until some process deletes the queue.
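Python's standard library has no System V message queue binding, but multiprocessing.Queue offers a similar FIFO channel between processes; here a message_type-style integer tag is carried inside each message. This is an in-process-library analogy, not the kernel facility described above:

```python
import multiprocessing

def producer(q):
    q.put((1, 'typed message'))   # (message_type-style tag, payload)

ctx = multiprocessing.get_context('fork')   # Unix-only start method
q = ctx.Queue()
p = ctx.Process(target=producer, args=(q,))
p.start()
mtype, payload = q.get()
p.join()
```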

Traditional IPC - Semaphores

Similar to a mutex, but it is a counting lock for processes: it allows up to N processes to acquire it, and when more processes request it, they must wait.

A semaphore remains in the kernel until some process deletes it.
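The counting behavior (allow N holders, make extras wait) can be seen with threading.Semaphore. This is an in-process analogue for illustration; the System V object lives in the kernel as described above:

```python
import threading

sem = threading.Semaphore(2)   # at most 2 concurrent holders

first = sem.acquire(blocking=False)
second = sem.acquire(blocking=False)
third = sem.acquire(blocking=False)   # count exhausted: would have to wait

sem.release()
sem.release()
```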

Traditional IPC-Shared memory

A process can take a portion of its own memory space and allow other processes to read and write it.

When using shared memory, synchronization must also be taken care of.

We can synchronize with a semaphore, or establish a mutex or another thread-synchronization variable inside the shared memory itself.
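multiprocessing.shared_memory (Python 3.8+) exposes the same idea: one process creates a segment, another attaches to it by name and writes into it. A hedged sketch:

```python
import multiprocessing
from multiprocessing import shared_memory

def writer(name):
    shm = shared_memory.SharedMemory(name=name)   # attach by name
    shm.buf[:5] = b'hello'                        # write into the shared segment
    shm.close()

ctx = multiprocessing.get_context('fork')
seg = shared_memory.SharedMemory(create=True, size=16)
p = ctx.Process(target=writer, args=(seg.name,))
p.start()
p.join()
data = bytes(seg.buf[:5])
seg.close()
seg.unlink()          # delete the segment, like removing a kernel IPC object
```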

Process End

When a child process terminates, it notifies its parent process, frees the memory it occupied, and leaves its exit information in the kernel (exit code: 0 for a normal exit, >0 for an abnormal exit). This information explains why the process exited.

When the parent process learns that the child has terminated, it uses the wait system call on the child; wait retrieves the child's exit information and frees the space that information occupies in the kernel.

If, however, the parent process ends before the child, the child becomes an orphan (orphaned) process. Orphan processes are handed over to the init process, which invokes the wait system call when each such child terminates.

A badly written program can leave a child's exit information stuck in the kernel (the parent never calls wait on the child, so the kernel keeps holding the task_struct); the child then becomes a zombie process. When large numbers of zombie processes accumulate, they squeeze the available memory space.
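The reaping mechanism can be sketched with Python's os module (Unix only): the parent collects the child's exit code with waitpid, which is exactly what prevents the child from lingering as a zombie:

```python
import os

pid = os.fork()
if pid == 0:
    os._exit(3)                    # child exits with a nonzero (abnormal) code

done, status = os.waitpid(pid, 0)  # reap: fetch exit info from the kernel
exit_code = os.WEXITSTATUS(status) # without this call, the child stays a zombie
```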

