Multi-threading and Multi-process Programming Ideas (1): Threads and Processes in the Context of an Operating System


Original: http://blog.csdn.net/luoweifu/article/details/46595285

What is a thread

What is a thread? How does a thread relate to a process? These are abstract questions and very broad topics that touch on a great deal of knowledge. I cannot promise to cover everything, or to get everything right. Even so, I want to explain it as well as I can, because this has long been a confusing area for me, and I hope my own understanding can lift a corner of the veil.

Task scheduling

What is a thread? To understand this concept, you first need to understand some operating system concepts. Most operating systems (such as Windows and Linux) schedule tasks with preemptive, time-slice round-robin scheduling: a task runs for a short period, is forcibly paused, and the next task runs, with every task taking turns. The short period during which a task runs is called a time slice; a task that is currently running is in the running state; a task whose time slice has ended and which has been forcibly paused is in the ready state, waiting for its next time slice. In this way every task gets to run. Because the CPU is very fast and the time slices are very short, the rapid switching among tasks gives the impression that many tasks are running "at the same time". This is what we call concurrency. (Don't think of concurrency as something exotic; its implementation is complex, but the concept itself fits in one sentence: multiple tasks are executed during the same period of time.) The multi-task scheduling process looks like this:

Figure 1: Task scheduling in the operating system
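To make the scheduling idea concrete, here is a minimal sketch of my own (not from the original article) that simulates time-slice round-robin scheduling in Python; the task names and the one-unit time slice are invented for illustration.

    from collections import deque

    # Each "task" is a name plus the amount of work (time units) it still needs.
    tasks = deque([["A", 3], ["B", 2], ["C", 4]])
    TIME_SLICE = 1  # each task may run for at most one unit per turn

    while tasks:
        name, remaining = tasks.popleft()      # take the next ready task
        work = min(TIME_SLICE, remaining)      # run it for at most one time slice
        remaining -= work
        print(f"task {name} runs for {work} unit(s), {remaining} left")
        if remaining > 0:
            tasks.append([name, remaining])    # preempted: back to the ready queue
        else:
            print(f"task {name} finished")

Each pass through the loop plays the role of one time slice; a real scheduler is preemptive and driven by a timer interrupt, but the rotation of the ready queue is the same idea.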

Process

We all know that the core of a computer is the CPU, which carries out all the computing tasks. The operating system is the manager of the computer: it is responsible for task scheduling, resource allocation, and management, and it commands all of the hardware. An application is a program with a particular function, and it runs on top of the operating system.

A process is a dynamic execution, over a data set, of a program with some independent function. It is the independent unit the operating system uses for resource allocation and scheduling, and it is the carrier in which an application runs. A process is an abstract concept that has never had a single standard definition. A process generally consists of three parts: the program, the data set, and the process control block. The program describes the work the process is to perform and is the instruction set that controls its execution; the data set is the data and workspace the program needs while executing; the process control block (PCB) contains the process's description and control information and is the unique sign of the process's existence.
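As a rough illustration (a sketch of my own, not part of the original text), the following Python snippet starts a child process and prints both process IDs, showing that the operating system treats each process as a separate, independently scheduled unit; the worker function name is just for the example.

    import os
    from multiprocessing import Process

    def worker():
        # The child is a separate process with its own PID and its own memory space.
        print(f"child  pid = {os.getpid()}, parent pid = {os.getppid()}")

    if __name__ == "__main__":
        print(f"parent pid = {os.getpid()}")
        p = Process(target=worker)
        p.start()
        p.join()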

Characteristics of a process:

Dynamic: A process is an execution of a program; it is temporary, has a lifetime, and is created and destroyed dynamically;

Concurrency: Any process can execute concurrently with other processes;

Independence: A process is an independent unit that the system uses to allocate resources and schedule work;

Structure: A process consists of three parts: the program, the data, and the process control block.

Thread

Early operating systems had no concept of a thread: the process was the smallest unit that owned resources and ran independently, and it was also the smallest unit of program execution. Task scheduling used preemptive time-slice round-robin scheduling, with the process as the smallest unit of scheduling. Each process had its own separate block of memory, so the address spaces of different processes were isolated from one another.

Later, as computers developed and the demands on the CPU grew, the switching overhead between processes became too large for increasingly complex programs, and the thread was invented. A thread is a single sequential flow of control within an executing program: it is the smallest unit of program execution and the basic unit of processor scheduling and dispatch. A process can have one or more threads, and all of the threads share the program's memory space (that is, the memory space of the process they belong to). A standard thread consists of a thread ID, a current instruction pointer (PC), a register set, and a stack. A process consists of its memory space (code, data, process space, open files) and one or more threads.
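A minimal sketch (my own, assuming Python's standard threading module) of one process containing several threads: every thread has its own ID, yet they all read and write the same module-level list, because threads share the memory space of their process.

    import threading

    shared = []                  # lives in the process's memory, visible to every thread
    lock = threading.Lock()      # serialize access to the shared list

    def worker(n):
        with lock:
            shared.append((n, threading.get_ident()))   # record this thread's ID

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(shared)   # all four threads wrote into the same list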

The difference between a process and a thread

Processes and threads have both been described above, but you may still feel confused about how similar they seem. Indeed, processes and threads are closely linked, so let's compare them point by point:

1. A thread is the smallest unit of program execution, and a process is the smallest unit of resources allocated by the operating system;

2. A process consists of one or more threads, which are different paths of execution through the process's code;

3. Processes are independent of one another, but threads within the same process share the process's memory space (including the code segment, data segment, heap, etc.) and some process-level resources (such as open files and signals); the threads of one process are not visible to other processes (the sketch after this list illustrates the memory sharing);

4. Scheduling and switching: Thread context switches are much faster than process context switches.
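To see point 3 in action, here is a small sketch of my own (assuming Python's threading and multiprocessing modules; the counter and increment names are invented): an increment made by a thread is visible in the main program, while the same increment made in a child process only changes that process's private copy.

    import threading
    from multiprocessing import Process

    counter = 0   # module-level variable: shared by threads, copied per process

    def increment():
        global counter
        counter += 1

    if __name__ == "__main__":
        t = threading.Thread(target=increment)
        t.start(); t.join()
        print("after thread:  counter =", counter)   # 1: threads share memory

        p = Process(target=increment)
        p.start(); p.join()
        print("after process: counter =", counter)   # still 1: the child changed only its own copy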

The relationship between threads and processes is illustrated below:

Figure 2: Resource sharing relationship between a process and a thread

Figure 3: The relationship between single thread and multithreading

In summary, both threads and processes are abstractions; a thread is a smaller abstraction than a process, and both can be used to implement concurrency.

In early operating systems there was no concept of a thread: the process was the smallest unit that owned resources and ran independently, and it was also the smallest unit of program execution. This is equivalent to each process having exactly one thread, with the process itself acting as that thread. For this reason a thread is sometimes called a lightweight process (LWP).

Figure 4: Early operating systems had only processes, no threads

Later, as computers developed and switching between tasks needed to be ever more efficient, a smaller abstraction was introduced: the thread. A process generally has multiple threads (but may have only one).

Figure 5: With threads, a process can contain multiple threads

Multi-Threading vs. multicore

The time-slice round-robin scheduling mentioned above means that a task runs for a short period, is forcibly paused, and the next task runs, with the tasks taking turns. Many operating system textbooks say that "only one task is executing at any given moment." So you might ask: what about a dual-core processor? Aren't its two cores running at the same time?

In fact, the statement "only one task at a time" is not accurate, or at least not complete. How do threads execute on a multicore processor? To answer that, we need to understand kernel threads.

A multicore processor integrates multiple computing cores on a single processor to improve computing power; the cores are true, parallel processing cores, and each processing core corresponds to a kernel thread. A kernel thread (Kernel-Level Thread, KLT) is a thread supported directly by the operating system kernel: the kernel performs the thread switches, schedules the threads through its scheduler, and is responsible for mapping the threads' work onto the processors. Typically one processing core corresponds to one kernel thread; for example, a single-core processor corresponds to one kernel thread, a dual-core processor to two kernel threads, and a quad-core processor to four kernel threads.

Computers today are usually "dual-core, four-thread" or "quad-core, eight-thread": they use hyper-threading technology to present one physical processing core as two logical processing cores, each corresponding to a kernel thread, so the number of CPUs the operating system sees is twice the actual number of physical cores. For example, if your computer is dual-core with four threads, opening Task Manager > Performance shows 4 CPU monitors; a quad-core, eight-thread machine shows 8 CPU monitors.

Figure 6: A dual-core, four-thread CPU as seen in Windows 8

Hyper-threading uses special hardware instructions to present one physical core as two logical processing cores, so that a single processor can exploit thread-level parallelism. This makes it compatible with multi-threaded operating systems and software, reduces CPU idle time, and improves CPU utilization. Whether hyper-threading is available (for example, dual-core with four threads) is determined by the processor hardware, and it also requires operating system support in order to be visible on the machine.
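As a quick check, here is a sketch of my own: os.cpu_count() reports the number of logical processors (what Task Manager shows as separate monitors), while the physical core count needs extra help; psutil is a third-party package and only one possible way to obtain it.

    import os

    # Logical processors: includes hyper-threading, matches the Task Manager monitors.
    print("logical CPUs :", os.cpu_count())

    # Physical cores: not in the standard library; psutil is one common option.
    try:
        import psutil
        print("physical CPUs:", psutil.cpu_count(logical=False))
    except ImportError:
        print("physical CPUs: install psutil to query the physical core count")

On a dual-core, four-thread machine the first number would be 4 and the second 2.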

Programs generally do not use kernel threads directly; instead they use a high-level interface built on top of kernel threads, the lightweight process (LWP). The lightweight process is what we usually mean by a thread (here we call it a user thread). Because every lightweight process is backed by a kernel thread, there can be lightweight processes only if kernel threads are supported first. There are three models for the correspondence between user threads and kernel threads: the one-to-one model, the many-to-one model, and the many-to-many model; the figures below illustrate the three models.

One-to-one model

In the one-to-one model, each user thread corresponds to exactly one kernel thread (the converse does not hold: a kernel thread does not necessarily have a corresponding user thread). In this case, if the CPU does not use hyper-threading (for example, a quad-core, four-thread machine), each user thread maps to a hardware thread of a physical CPU, and concurrency between threads is true concurrency. The one-to-one model gives user threads the same advantages as kernel threads: when one thread blocks for some reason, the execution of the other threads is not affected, so a multi-threaded process performs better on a multiprocessor system.

But the one-to-one model also has two drawbacks: 1. many operating systems limit the number of kernel threads, so the one-to-one model limits the number of user threads as well; 2. in many operating systems, scheduling kernel threads involves an expensive context switch, which lowers the execution efficiency of user threads.

Figure 7: One-to-one model

Many-to-one model

The many-to-one model maps multiple user threads onto a single kernel thread, and switching between the user threads is done by user-mode code, so thread switching in this model is much faster than in the one-to-one model, and the number of user threads is almost unlimited. However, the many-to-one model has two disadvantages: 1. if one user thread blocks, all the other threads are unable to execute, because the kernel thread also blocks; 2. on a multiprocessor system, adding processors does not noticeably improve the threading performance of the many-to-one model, because all the user threads are mapped onto a single processor.

Figure 8: Many-to-one model
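To get a feel for the many-to-one idea, here is a toy sketch of my own (this is not how an operating system implements it): several "user threads" written as Python generators are switched purely by user-mode code on a single OS thread, and anything that blocks one of them stalls them all.

    from collections import deque

    def user_thread(name, steps):
        # Each yield is a voluntary switch point handled entirely in user mode.
        for i in range(steps):
            print(f"{name}: step {i}")
            yield

    def run(threads):
        ready = deque(threads)
        while ready:
            t = ready.popleft()
            try:
                next(t)              # let this "user thread" run until it yields
                ready.append(t)      # back to the ready queue
            except StopIteration:
                pass                 # this "user thread" has exited

    run([user_thread("U1", 2), user_thread("U2", 3), user_thread("U3", 1)])
    # If any step made a blocking call (e.g. time.sleep(10)) instead of yielding,
    # every "user thread" would stall, because they all share one kernel thread.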

Many-to-many model

The many-to-many model combines the advantages of the one-to-one and many-to-one models by mapping multiple user threads onto multiple kernel threads. Its advantages are: 1. a blocked user thread does not cause all threads to block, because there are other kernel threads available to be scheduled; 2. the many-to-many model places no limit on the number of user threads; 3. on a multiprocessor operating system, threads in the many-to-many model get a certain performance boost, although not as large as in the one-to-one model.

Most popular operating systems today use the many-to-many model.

Figure 9: Many-to-many model

Viewing processes and threads

An application may be multi-threaded or multi-process; how can we see this? Under Windows we only need to open Task Manager to see the number of processes and threads an application has. Press Ctrl+Alt+Del, or right-click the taskbar, to open Task Manager.

To view the number of processes and threads:

Figure 10: Viewing the number of threads and processes

Under the Processes tab we can see how many threads an application contains. If an application has multiple processes, we can see each of them; for example, the Google Chrome browser has more than one process. Likewise, if you open more than one instance of an application there will be multiple processes: I have opened two cmd windows, so there are two cmd processes. If the thread count column is not shown, you can add it through the "View > Select Columns" menu.
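The same information can be queried programmatically. A small sketch of my own, using the third-party psutil package as one possible tool, lists a few processes together with their thread counts, roughly what the Processes tab shows:

    import psutil   # third-party package: pip install psutil

    # PID, name, and thread count for the first few processes.
    for proc in list(psutil.process_iter(["pid", "name", "num_threads"]))[:10]:
        info = proc.info
        print(info["pid"], info["name"], "threads:", info["num_threads"])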

To view CPU and memory utilization:

In the Performance tab we can see CPU and memory usage. The number of CPU monitors shown matches the number of logical processing cores; for example, my dual-core, four-thread computer shows four monitors.

Figure 11: Viewing CPU and memory utilization

The life cycle of a thread

When the number of threads is less than the number of processors, the concurrency among threads is true concurrency: different threads run on different processors. But when there are more threads than processors, the concurrency is partly simulated, because at least one processor must run more than one thread.

Concurrency among multiple threads running on a single processor is a simulated state: the operating system uses time-slice round-robin scheduling to run each thread in turn. Today almost all modern operating systems use time-slice-based preemptive scheduling, including familiar ones such as Unix, Linux, Windows, and Mac OS X.

We know that the thread is the smallest unit of program execution and the smallest unit of task scheduling. In early, process-only operating systems, a process had five states: created, ready, running, blocked (waiting), and exited. An early process is equivalent to a modern process with only one thread, so today's threads also have these five states, and the life cycle of a thread is similar to the life cycle of an early process.

Figure 12: The life cycle of an early process

A running process moves among three states: ready, running, and blocked; the created and exited states describe how a process is created and how it exits.

Created: The process is being created and cannot run yet. The work the operating system does when creating a process includes allocating and setting up a process control block entry, building resource tables and allocating resources, loading the program, and setting up the address space;

Ready: The process's time slice is used up and it is forcibly paused, waiting for its next time slice;

Running: The process is executing and occupies a time slice;

Blocked: Also called the waiting state; the process is waiting for some event (such as I/O or another process) to finish;

Exited: The process has ended (this is also called the terminated state), and the resources allocated by the operating system are released.

Figure 13: The life cycle of a thread

Created: A new thread has been created and is waiting to be scheduled for execution;

Ready: The thread's time slice is used up and it is forcibly paused, waiting for its next time slice;

Running: The thread is executing and occupies a time slice;

Blocked: Also called the waiting state; the thread is waiting for some event (such as I/O or another thread) to finish;

Exited: When a thread completes its task or some other termination condition occurs, it enters the exited state and the resources allocated to it are released.
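A small sketch of my own that walks a single Python thread through a recognizable life cycle: it is created, blocks while waiting for an event, runs, and exits; is_alive() lets the main thread observe the transitions. The worker function and the 0.1-second "work" are just for illustration.

    import threading
    import time

    event = threading.Event()

    def worker():
        event.wait()            # blocked: waiting for another thread to signal
        time.sleep(0.1)         # running: doing some work in its time slices
        print("worker: done")   # about to exit

    t = threading.Thread(target=worker)    # created, not yet running
    print("before start:", t.is_alive())   # False

    t.start()                              # now schedulable: ready / running / blocked
    print("after start: ", t.is_alive())   # True (it is blocked on the event)

    event.set()                            # unblock the worker
    t.join()                               # wait for the thread to exit
    print("after join:  ", t.is_alive())   # False: the thread has exited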
