How processes and threads communicate: the different mechanisms

Source: Internet
Author: User
Tags: message queue, mutex, semaphore

A relatively comprehensive overview of processes and threads.

The difference between a process and a thread:

A popular explanation

A system running many processes can be likened to a road with many carriages.

Different processes can be understood as different carriages

A single carriage can be pulled by many horses; these horses are the threads.

Suppose the road is just wide enough for one carriage to pass.

The road can be considered a critical resource.

The carriage then becomes the smallest unit of resource allocation (the process).

And the horses pulling the same carriage (the threads) are the smallest units of execution.

Each carriage is pulled by at least one horse.

When a carriage has only one horse, there is no strict boundary between the process and the thread; the distinction is only conceptual.

Only when a carriage has more than one horse can processes and threads be strictly distinguished.

Professional explanations:

In short, a program has at least one process, and a process has at least one thread.

Threads are divided at a finer granularity than processes, which gives multithreaded programs higher concurrency. In addition, each process has its own independent memory space during execution, while the threads of a process share memory, which greatly improves the program's efficiency.

Threads still differ from processes during execution. Each independent thread has its own entry point, sequential execution path, and exit point. However, a thread cannot execute on its own; it must live inside an application, and the application provides control over the execution of its threads.

From a logical point of view, multithreading means that multiple parts of an application can execute concurrently. However, the operating system does not treat those threads as multiple independent applications when it schedules, manages, and allocates resources. This is the important difference between a process and a thread.

A process is one execution of a program with an independent function over some data set, and it is the independent unit of resource allocation and scheduling in the system.

A thread is an entity within a process and the basic unit of CPU scheduling and dispatch; it is a unit that can run independently and is smaller than a process. A thread essentially owns no system resources of its own, only the few resources it needs in order to run (such as a program counter, a set of registers, and a stack). However, it shares all of the resources owned by the process with the other threads belonging to that process.

One thread can create and cancel another thread, and multiple threads in the same process can execute concurrently.
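As a concrete illustration, here is a minimal sketch of one thread creating another and waiting for it with POSIX threads. It assumes a POSIX system and compilation with -pthread; the worker function and message are made up for the example.

```c
#include <pthread.h>
#include <stdio.h>

/* Worker function run by the newly created thread. */
static void *worker(void *arg)
{
    printf("worker: running in the same address space, arg=%s\n", (const char *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    /* The main thread creates another thread within the same process. */
    if (pthread_create(&tid, NULL, worker, "hello") != 0) {
        perror("pthread_create");
        return 1;
    }

    /* Wait for the worker to finish; both threads ran concurrently. */
    pthread_join(tid, NULL);
    return 0;
}
```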

The main difference between processes and threads is that they are different ways for the operating system to manage resources. A process has its own address space; in protected mode, when one process crashes it does not affect other processes. A thread is just one execution path within a process: it has its own stack and local variables, but threads have no separate address spaces of their own, so when one thread dies the whole process dies with it. A multi-process program is therefore more robust than a multithreaded one, but switching between processes costs more resources and is somewhat less efficient. On the other hand, for concurrent operations that must share certain variables, only threads can be used, not processes. If you are interested, I suggest reading Modern Operating Systems or Operating Systems: Design and Implementation, which explain these questions much more clearly.

+++

Process Concepts

A process is the basic unit of resource allocation and the basic unit of scheduling. For example, when a user runs a program, the system creates a process and allocates resources to it, including various tables, memory space, disk space, I/O devices, and so on. The process is then placed in the process ready queue. Only when the process scheduler selects it and allocates the CPU and other related resources to it does the process actually start running. The process is therefore the unit of concurrent execution in the system.

In operating systems with a microkernel structure, such as Mach and Windows NT, the role of the process has changed: it is only the unit of resource allocation, not the unit of scheduling. In a microkernel system, the basic unit that is actually scheduled is the thread. The unit that implements concurrency is therefore the thread.

Threading Concepts

A thread is the smallest unit of execution within a process, that is, the basic unit of processor scheduling. If a process is understood as a task that the operating system performs logically, then a thread represents one of the many possible subtasks needed to complete that task. For example, if a user starts a database application in a window, the operating system represents that invocation of the database as a process. Suppose the user wants to generate a payroll report from the database and write it to a file: that is one subtask. While the payroll report is being generated, the user can also issue a database query, which is another subtask. The operating system then represents each request, the payroll report and the newly issued query, as a separate thread within the database process. Threads can be scheduled for execution independently on a processor, which allows several threads to run on separate processors simultaneously in a multiprocessor environment. The operating system provides threads precisely to make this kind of concurrency convenient and effective.

Benefits of introducing threads

(1) Easier scheduling.

(2) Improved concurrency. Implementing concurrency with threads is easy and efficient: a process can create multiple threads to perform different parts of the same program.

(3) Lower overhead. Creating a thread is much faster than creating a process and requires very little overhead.

(4) Makes full use of multiprocessor hardware. By creating a multithreaded process (that is, a process with two or more threads), each thread can run on a different processor, giving the application real parallelism and keeping every processor busy.

++

Process and thread relationships:

(1) A thread can belong to only one process, and a process may have multiple threads, but at least one thread.

(2) Resources are assigned to the process, and all threads of the same process share all of that process's resources.

(3) The processor is assigned to threads, that is, it is threads that actually run on the processor.

(4) Threads need to cooperate and synchronize with one another during execution. Threads belonging to different processes must use message passing to synchronize.

A thread is a unit of execution within a process and a scheduled entity within a process.

The difference from the process:

(1) Scheduling: the thread is the basic unit of scheduling and dispatch, while the process is the basic unit of resource ownership.

(2) Concurrency: not only can processes execute concurrently, but multiple threads of the same process can also execute concurrently.

(3) Resource ownership: a process is an independent unit that owns resources; threads do not own system resources, but they can access the resources belonging to their process.

(4) Overhead: when a process is created or destroyed, the system must allocate or reclaim resources for it, so the system overhead is significantly greater than when a thread is created or destroyed.

+++

How processes communicate with each other:

1. Pipes and named pipes (FIFOs):

A pipe can be used for communication between related processes, such as a parent and its child. A named pipe, in addition to everything an ordinary pipe can do, also allows communication between unrelated processes.
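A minimal sketch of pipe-based communication between a parent and the child it forks, assuming a POSIX system; the message text is only illustrative.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];                      /* fds[0]: read end, fds[1]: write end */
    if (pipe(fds) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                  /* child: reads from the pipe */
        char buf[64] = {0};
        close(fds[1]);
        read(fds[0], buf, sizeof(buf) - 1);
        printf("child received: %s\n", buf);
        close(fds[0]);
        return 0;
    }

    /* parent: writes into the pipe */
    const char *msg = "hello from parent";
    close(fds[0]);
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);
    return 0;
}
```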

2. Signals:

A signal is a software-level simulation of the hardware interrupt mechanism. It is a relatively complex form of communication used to notify a process that some event has occurred; the effect of a process receiving a signal can be said to be the same as that of a processor receiving an interrupt request.
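A minimal sketch of signal handling with sigaction on a POSIX system; the choice of SIGUSR1 and the handler name are illustrative.

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

/* The handler runs asynchronously when the signal is delivered,
 * much like an interrupt service routine. */
static void on_sigusr1(int signo)
{
    (void)signo;
    got_signal = 1;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_sigusr1;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);

    printf("send me SIGUSR1: kill -USR1 %d\n", (int)getpid());
    while (!got_signal)
        pause();                 /* sleep until a signal arrives */
    printf("SIGUSR1 received\n");
    return 0;
}
```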

3. Message queues:

A message queue is a linked list of messages. It overcomes the limited amount of information that the previous two communication methods can carry. A process with write permission can add new messages to the queue according to certain rules; a process with read permission can read messages out of the queue.
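A minimal sketch using a System V message queue (msgget/msgsnd/msgrcv); for brevity a single process plays both the writer and the reader, and the message text is illustrative. Unrelated processes would normally derive a shared key with ftok() on an agreed path.

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

/* Message format: a type field followed by the payload. */
struct msg { long mtype; char text[64]; };

int main(void)
{
    /* Create a private queue for this demonstration. */
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (qid == -1) { perror("msgget"); return 1; }

    struct msg out = { .mtype = 1 };
    strcpy(out.text, "payroll report ready");
    msgsnd(qid, &out, sizeof(out.text), 0);        /* writer adds a message */

    struct msg in;
    msgrcv(qid, &in, sizeof(in.text), 1, 0);       /* reader takes it out */
    printf("received: %s\n", in.text);

    msgctl(qid, IPC_RMID, NULL);                   /* remove the queue */
    return 0;
}
```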

4. Shared memory:

This is arguably the most useful form of interprocess communication. It lets multiple processes access the same region of memory, so updates that one process makes to the shared memory are visible to the other processes in a timely way. This approach has to rely on some kind of synchronization mechanism, such as mutexes or semaphores.
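A minimal sketch of shared memory on Linux, using an anonymous shared mapping inherited across fork(); unrelated processes would typically use shm_open() or shmget() instead, and wait() stands in for proper synchronization here.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* A shared, anonymous mapping: parent and child see the same bytes. */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                       /* child writes into shared memory */
        strcpy(shared, "updated by child");
        return 0;
    }

    wait(NULL);                              /* crude synchronization for the demo */
    printf("parent sees: %s\n", shared);     /* the child's update is visible */

    munmap(shared, 4096);
    return 0;
}
```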

5. Semaphores:

A means of synchronization and mutual exclusion, used mainly between processes and between the different threads of the same process.
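A minimal sketch of a process-shared POSIX semaphore protecting a counter that lives in shared memory; it assumes Linux (sem_init with pshared=1) and compilation with -pthread.

```c
#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Put the semaphore and a counter in memory shared by parent and child. */
    struct { sem_t lock; int counter; } *shm =
        mmap(NULL, sizeof(*shm), PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shm == MAP_FAILED) { perror("mmap"); return 1; }
    sem_init(&shm->lock, 1 /* shared between processes */, 1);
    shm->counter = 0;

    pid_t pid = fork();
    for (int i = 0; i < 100000; i++) {
        sem_wait(&shm->lock);        /* P: enter the critical section */
        shm->counter++;
        sem_post(&shm->lock);        /* V: leave the critical section */
    }
    if (pid == 0) return 0;          /* child is done */

    wait(NULL);
    printf("counter = %d\n", shm->counter);   /* 200000 thanks to mutual exclusion */
    return 0;
}
```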

6. Sockets:

This is a more general interprocess communication mechanism. It can be used for communication between processes on different machines across a network, and it is very widely used.
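A minimal sketch using a UNIX-domain socket pair between a parent and child; communication across machines would use AF_INET sockets with connect() and accept() instead, which the sketch only hints at in a comment.

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* A connected pair of UNIX-domain sockets; for communication across
     * machines you would use AF_INET sockets with connect()/accept(). */
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
        perror("socketpair");
        return 1;
    }

    if (fork() == 0) {                       /* child: one endpoint */
        char buf[64] = {0};
        close(sv[0]);
        read(sv[1], buf, sizeof(buf) - 1);
        printf("child got: %s\n", buf);
        close(sv[1]);
        return 0;
    }

    /* parent: the other endpoint */
    close(sv[1]);
    write(sv[0], "ping", 4);
    close(sv[0]);
    wait(NULL);
    return 0;
}
```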

++

Synchronization and communication between threads:

1. Semaphores: binary semaphores, mutex (mutual-exclusion) semaphores, counting (integer) semaphores

2. Messages: message queues, message mailboxes

3. Events

Mutex semaphore: it must be acquired and released by the same task; a release by any other task is invalid. The same task may acquire it recursively. (The mutex semaphore is a subset of the binary semaphore.)

Binary semaphore: once a task has successfully acquired it, it can be released by another task (this is the difference from the mutex semaphore).

Counting (integer) semaphore: its value is not limited to 0 and 1; one task may acquire it and another task may release it. (It contains the binary semaphore as a special case; the binary semaphore is a subset of the counting semaphore.)

Using a binary semaphore for mutual exclusion between tasks:

There is only one printer resource, shared by three tasks A, B, and C. When A obtains the right to use it, then in order to prevent the other tasks from wrongly releasing the semaphore (a binary semaphore allows other tasks to release it), the door of the printer room must be locked (entering the critical section); after use, A releases the semaphore and opens the door again (leaving the critical section), so another task can go in and print. (A mutex semaphore, by contrast, may only be released by the task that acquired it, so no other task can wrongly release it and no such critical-section precaution is needed. The mutex semaphore is a subset of the binary semaphore.)
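The "locked door" can be sketched with a pthread mutex, which, like the mutex semaphore described above, may only be released by the task that acquired it; the task names A, B, and C are taken from the analogy.

```c
#include <pthread.h>
#include <stdio.h>

/* The "door" to the printer room: only the thread holding the lock may print. */
static pthread_mutex_t printer_lock = PTHREAD_MUTEX_INITIALIZER;

static void *task(void *name)
{
    pthread_mutex_lock(&printer_lock);      /* lock the door (enter critical section) */
    printf("%s is using the printer\n", (const char *)name);
    pthread_mutex_unlock(&printer_lock);    /* open the door (leave critical section) */
    return NULL;
}

int main(void)
{
    pthread_t a, b, c;
    pthread_create(&a, NULL, task, "A");
    pthread_create(&b, NULL, task, "B");
    pthread_create(&c, NULL, task, "C");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    pthread_join(c, NULL);
    return 0;
}
```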

Using a binary semaphore for task synchronization:

Task A keeps waiting on the semaphore, and task B posts (releases) the semaphore at the right time, which accomplishes the synchronization.
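A minimal sketch of this pattern with an unnamed POSIX semaphore shared by two threads: A blocks on sem_wait() until B calls sem_post(). It assumes Linux and -pthread; the sleep() only simulates B's work.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t ready;                      /* starts at 0: the event has not happened yet */

/* Task A: blocks until B signals that the event has happened. */
static void *task_a(void *arg)
{
    (void)arg;
    sem_wait(&ready);                    /* P: wait for B's release */
    printf("A: woke up after B posted the semaphore\n");
    return NULL;
}

/* Task B: does some work, then releases the semaphore. */
static void *task_b(void *arg)
{
    (void)arg;
    sleep(1);                            /* pretend to work */
    printf("B: posting the semaphore\n");
    sem_post(&ready);                    /* V: wake A up */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    sem_init(&ready, 0, 0);
    pthread_create(&a, NULL, task_a, NULL);
    pthread_create(&b, NULL, task_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    sem_destroy(&ready);
    return 0;
}
```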

Record semaphore:

Besides an integer value (the count), each semaphore S maintains a waiting queue that records the threads blocked on it. When the semaphore is released, its value is incremented and the system automatically wakes one thread from the waiting queue, allowing it to acquire the semaphore, at which point the value is decremented again.
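A simplified sketch of this idea in C, using a count plus a condition variable as the waiting queue; this is a non-negative variant for illustration, not a faithful textbook implementation.

```c
#include <pthread.h>

/* A sketch of a "record semaphore": an integer count plus a queue of waiters.
 * Here the waiting queue is provided by the condition variable. */
typedef struct {
    int count;
    pthread_mutex_t lock;
    pthread_cond_t  waiters;
} rsem_t;

#define RSEM_INIT(n) { (n), PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER }

/* P operation: block on the waiting queue while the count is zero, then take one. */
static void rsem_wait(rsem_t *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)
        pthread_cond_wait(&s->waiters, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

/* V operation: increment the count and wake one thread from the waiting queue. */
static void rsem_post(rsem_t *s)
{
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->waiters);
    pthread_mutex_unlock(&s->lock);
}
```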

+++

The difference between synchronization and mutual exclusion:

When there are multiple threads, it is often necessary to synchronize them when they access the same data or resources. For example, suppose a program has one thread that reads a file into memory and another thread that counts the number of characters in the file. Of course, it makes no sense to count before the whole file has been loaded into memory. But since each operation has its own thread, the operating system treats the two threads as independent tasks and might start counting before the entire file is in memory. To solve this problem, the two threads must be synchronized.
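A minimal sketch of synchronizing the two threads with a semaphore, so the counting thread cannot run ahead of the loader. The buffer contents stand in for a real file read, and the names are illustrative; assumes Linux and -pthread.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <string.h>

static char buffer[4096];
static size_t length;
static sem_t loaded;                    /* 0 until the "file" is in memory */

/* Thread 1: load the data into memory, then signal that it is ready. */
static void *loader(void *arg)
{
    (void)arg;
    strcpy(buffer, "example file contents");   /* stand-in for reading a real file */
    length = strlen(buffer);
    sem_post(&loaded);
    return NULL;
}

/* Thread 2: must not count before the load has completed. */
static void *counter(void *arg)
{
    (void)arg;
    sem_wait(&loaded);                  /* synchronize: wait for the loader */
    printf("character count: %zu\n", length);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&loaded, 0, 0);
    pthread_create(&t2, NULL, counter, NULL);
    pthread_create(&t1, NULL, loader, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&loaded);
    return 0;
}
```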

Mutual exclusion refers to program fragments scattered across different processes: when one process is running one of these fragments, the other processes cannot run any of them and must wait until the first process has finished with it. Defined in terms of resource access, a mutually exclusive resource may be accessed by only one visitor at a time; access is unique and exclusive. However, mutual exclusion cannot constrain the order in which visitors access the resource, that is, the access is unordered.

Synchronization refers to program fragments scattered across different processes whose execution must follow a strict order, an order determined by the specific task to be accomplished. Defined in terms of resource access, synchronization means that, usually on top of mutual exclusion, some other mechanism ensures that visitors access the resource in a particular order. In most cases synchronization already implies mutual exclusion; in particular, all writes to the resource must be mutually exclusive. In a few cases, multiple visitors may be allowed to access the resource at the same time.
