Linux Multithreading Practice (1)

Thread Concept

Within a program, a single path of execution is called a thread. A more precise definition: a thread is a "control sequence / instruction sequence within a process".

Every process has at least one thread of execution.

Process vs. Thread

1. A process is the basic unit of resource allocation (processes compete for resources), while a thread is the smallest unit of processor scheduling (program execution).

2. Threads share the process's data, but each thread also has a small amount of data of its own, such as a thread ID, program counter, register set, stack, errno value, signal mask, and priority.

3. Threads within a process share resources such as the code segment, the data segment, open files, and signal handlers.

fork vs. pthread_create

When a process calls fork, a new copy of the process is created with its own variables and its own PID. The new process is scheduled independently and executes almost entirely independently of the process that created it (the parent process).
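As a minimal sketch of this behavior (POSIX C; error handling abbreviated), the child receives a copy of the parent's variables at the moment of the fork, so an increment in the child is invisible to the parent:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int counter = 0;                /* copied into the child at fork time */
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        }
        if (pid == 0) {                 /* child: own copy of counter, own PID */
            counter++;
            printf("child  pid=%d counter=%d\n", getpid(), counter);
        } else {                        /* parent: its copy is untouched */
            waitpid(pid, NULL, 0);
            printf("parent pid=%d counter=%d\n", getpid(), counter);
        }
        return 0;
    }

The parent still prints counter=0: after the fork, the two address spaces are separate.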

 

When a new thread is created within a process, the new thread gets its own stack (and therefore its own local variables), but it shares global variables, file descriptors, signal handlers, and the current working directory with its creator (so, of the items above, the code, data, and open files are shared!).
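A comparable sketch with pthread_create (compile with gcc -pthread; the names shared_counter and worker are illustrative): the global is visible to both threads, while each thread's locals live on its own stack.

    #include <pthread.h>
    #include <stdio.h>

    int shared_counter = 0;             /* global: shared by all threads */

    static void *worker(void *arg)
    {
        (void)arg;
        int local = 1;                  /* lives on this thread's own stack */
        shared_counter++;               /* visible to the main thread too */
        printf("worker: local=%d shared=%d\n", local, shared_counter);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        pthread_join(tid, NULL);
        printf("main:   shared=%d\n", shared_counter);  /* prints 1 */
        return 0;
    }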


Thread advantages

• Creating a new thread costs much less than creating a new process (threads are therefore sometimes called lightweight processes).

• Switching between threads requires far less work from the operating system than switching between processes, which improves concurrency.

• Threads consume far fewer resources than processes.

• Threads can take full advantage of multiple processors.

• While waiting for a slow I/O operation to finish, the program can carry on with other computation.

• Compute-intensive applications can split their computation across multiple threads to run on a multiprocessor system.

• I/O-intensive applications can overlap I/O operations to improve performance, with different threads waiting on different I/O operations at the same time.

 

Thread disadvantages

• Performance loss

A compute-intensive thread that is rarely blocked by external events cannot readily share a processor with other threads. If there are more compute-intensive threads than available processors, performance may suffer significantly; the loss here means extra synchronization and scheduling overhead while the available resources stay the same.

 

• Reduced robustness

Writing multithreaded code requires more thorough and careful design. In a multithreaded program, a slight deviation in timing, or sharing a variable that should not be shared, can easily cause harm; in other words, threads are poorly protected from one another. If one thread in a process crashes, it can bring down the other threads as well!

 

• Lack of access control

The process is the basic granularity of access control. Calling certain OS functions in one thread affects the entire process: for example, if one thread changes the current working directory, it changes for every other thread too (see the sketch after this list).

 

• Increased programming difficulty

Writing and debugging a multithreaded program is considerably harder than writing a single-threaded one.
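As promised above, a small sketch of the access-control point (POSIX C, compile with -pthread; /tmp is just an illustrative target): the working directory is a per-process attribute, so one thread's chdir changes it for every thread.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *change_dir(void *arg)
    {
        (void)arg;
        chdir("/tmp");                  /* affects the whole process */
        return NULL;
    }

    int main(void)
    {
        char buf[256];
        pthread_t tid;
        printf("before: %s\n", getcwd(buf, sizeof buf));
        pthread_create(&tid, NULL, change_dir, NULL);
        pthread_join(tid, NULL);
        printf("after:  %s\n", getcwd(buf, sizeof buf)); /* now /tmp */
        return 0;
    }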

 

Thread-Scheduling Contention Scope

The operating system provides several models for scheduling the threads an application creates. The key difference between them is the thread-scheduling contention scope: the set of threads against which a thread competes for system resources, especially CPU time:

1. Process contention scope: each thread competes for scheduled CPU time with the other threads of the same process (but not directly with threads in other processes).

2. System contention scope: a thread competes directly with every other thread in the system.
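POSIX exposes this choice through pthread_attr_setscope. A minimal sketch (note that on Linux, NPTL supports only PTHREAD_SCOPE_SYSTEM, so requesting process scope fails there with ENOTSUP):

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    static void *task(void *arg) { (void)arg; return NULL; }

    int main(void)
    {
        pthread_attr_t attr;
        pthread_t tid;
        pthread_attr_init(&attr);

        /* Request process contention scope; Linux reports ENOTSUP. */
        int rc = pthread_attr_setscope(&attr, PTHREAD_SCOPE_PROCESS);
        if (rc != 0)
            fprintf(stderr, "process scope: %s\n", strerror(rc));

        /* System contention scope: the default (and only option) on Linux. */
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
        pthread_create(&tid, &attr, task, NULL);
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }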


Multithreading Model

1. N:1 [early OSes: many user-level threads mapped onto one kernel-level thread]

The "thread implementation" is built on top of the process-control mechanism and managed by a user-space library; the OS kernel knows nothing about the threads. These are called user-space threads.

These threads work in the process contention scope.

Advantage: in the N:1 model the kernel takes no part in any of the threads' lifecycle events or in context switches between threads of the same process. Thread management happens entirely in user space, so it is quite efficient.

Disadvantages:

(1) The threads of one process can be scheduled onto only one CPU at a time, which limits the amount of parallelism available.

(2) If one thread performs a blocking operation (such as read), every thread in the process blocks until the operation completes. To eliminate this restriction, some thread libraries provide wrappers for blocking functions, replacing these system calls with non-blocking versions, as sketched below.
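A hedged sketch of such a wrapper (illustrative only; a real N:1 library would switch to another user thread here rather than yield to the kernel): the descriptor is put into non-blocking mode and the call retries on EAGAIN.

    #include <errno.h>
    #include <fcntl.h>
    #include <sched.h>
    #include <unistd.h>

    /* Illustrative wrapper: make read() non-blocking and retry on EAGAIN,
       yielding in between so that other work can proceed. */
    ssize_t wrapped_read(int fd, void *buf, size_t count)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        fcntl(fd, F_SETFL, flags | O_NONBLOCK);
        for (;;) {
            ssize_t n = read(fd, buf, count);
            if (n >= 0 || errno != EAGAIN)
                return n;               /* data, EOF, or a real error */
            sched_yield();              /* stand-in for "run another thread" */
        }
    }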

 

2. 1:1 [each user-level thread mapped onto one kernel-level thread]

In the kernel-thread model, every thread the application creates is managed directly by a kernel thread.

The OS kernel dispatches each kernel thread onto the system's CPUs, so all threads work in the system contention scope.

Advantage: when one thread blocks, another thread can continue to run, improving concurrency. However, because thread creation and scheduling are done by the kernel, such threads cost more (though generally still less than a process). A small sketch of the advantage follows.
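In this sketch (compile with -pthread; the 2-second sleep stands in for any blocking call), the blocked thread does not stop the main thread from making progress, which an N:1 implementation could not guarantee:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *blocker(void *arg)
    {
        (void)arg;
        sleep(2);                       /* blocks only this kernel thread */
        puts("blocker: woke up");
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, blocker, NULL);
        for (int i = 0; i < 3; i++) {   /* main thread keeps running */
            puts("main: still working");
            usleep(500000);             /* 0.5 s */
        }
        pthread_join(tid, NULL);
        return 0;
    }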

 

 

3. N:M [N user-level threads mapped onto M kernel-level threads, N >= M; used by some POSIX thread implementations, e.g. older Solaris. Note that modern Linux's NPTL implementation actually uses the 1:1 model.]

The N:M thread model provides two levels of control: user threads are multiplexed onto schedulable entities called lightweight processes (LWPs), and each LWP is in turn mapped one-to-one onto a kernel thread. [user thread -> LWP -> kernel thread (which the kernel schedules)]

 

A lightweight process is a kernel-supported user thread, an abstraction over a kernel thread. Each process owns one or more lightweight processes, and each lightweight process is bound to one kernel thread.

The N:M thread model overcomes the low concurrency of the many-to-one model and avoids the one-to-one model's cost of dedicating a kernel-level thread to every user thread; it combines the advantages of both.

Classification of Thread Implementations

(1) User-level threads

User-level threads mainly address the cost of context switching. The scheduling algorithm and scheduling process are entirely up to the user, and no specific kernel support is required at run time. The operating system typically supplies a user-space thread library offering thread creation, scheduling, and cancellation, while the kernel continues to manage only processes. If any thread in a process calls a blocking system call, the whole process, including all of its other threads, blocks. The main disadvantage of user-level threads is that they cannot exploit multiple processors when scheduling the threads of a single process.

 

(2) Kernel-level threads

Kernel-level threads allow threads in different processes to be scheduled under a single relative-priority policy, exploiting the concurrency of multiple processors.

Currently most systems let user-level and kernel-level threads coexist: a user-level thread may correspond to one or several kernel-level threads, i.e. the one-to-one or many-to-one model. This both meets the needs of multiprocessor systems and keeps scheduling overhead small.

 

Summary:

The thread mechanism greatly speeds up context switching and saves substantial resources. However, because scheduling must be managed in both user space and the kernel, it raises implementation complexity and introduces the possibility of priority inversion. The synchronization design and debugging of a multithreaded program likewise add to the difficulty of implementation.

