[CLR via C#] 25. Thread Basics


When Microsoft designed the OS kernel, it decided to run each instance of an application in a process. A process is a collection of resources used by an instance of an application. Each process is given a virtual address space, ensuring that the code and data used by one process cannot be accessed by another. This makes applications robust, because one process cannot corrupt the code or data of another. In addition, a process cannot access the kernel code and data of the OS. But there was a problem: if an application entered an infinite loop on a machine with a single-core CPU, that loop would execute forever, no other code could run, and the system would stop responding. Microsoft's solution to this was the thread. A thread's job is to virtualize the CPU. Windows gives each process its own thread (which functions like a logical CPU). If application code enters an infinite loop, the process associated with that code freezes, but other processes, each with its own thread, are not frozen and continue to run. Threads are powerful, but like all virtualization mechanisms, they incur space (memory consumption) and time (runtime execution performance) overhead. Every thread has the following:
  • Thread kernel object. The OS allocates and initializes this data structure for each thread created in the system. The data structure contains a set of properties that describe the thread, as well as what is called the thread's context. The context is a block of memory containing a set of CPU registers. When Windows runs on a machine with x86 CPUs, the thread context uses about 700 bytes of memory. On x64 and IA64 CPUs, the context uses about 1,240 bytes and 2,500 bytes of memory, respectively.
  • Thread environment block (TEB). The TEB is a block of memory allocated and initialized in user mode. It consumes one page of memory (4 KB on x86 and x64 CPUs, 8 KB on IA64 CPUs). The TEB contains the head of the thread's exception-handling chain: each try block the thread enters inserts a node at the head of this chain, and the node is removed when the thread exits the try block. The TEB also contains the thread's thread-local storage data, as well as some data structures used by GDI and OpenGL graphics.
  • User-mode stack. The user-mode stack stores local variables and arguments passed to methods. It also contains an address indicating where execution should resume when the current method returns. By default, Windows allocates 1 MB of memory for each thread's user-mode stack.
  • Kernel-mode stack. The kernel-mode stack is used when application code passes arguments to a kernel-mode function in the OS. For security reasons, Windows copies any arguments passed from user-mode code to the kernel from the thread's user-mode stack to the thread's kernel-mode stack. Once copied, the kernel can verify the arguments' values and then process them. In addition, the kernel calls methods within itself and uses the kernel-mode stack to pass its own arguments, store functions' local variables, and store return addresses. The kernel-mode stack is 12 KB on 32-bit Windows and 24 KB on 64-bit Windows.
  • DLL thread-attach and thread-detach notifications. Windows has a policy that whenever a thread is created in a process, the DllMain method of every DLL loaded in that process is called with a DLL_THREAD_ATTACH flag. Similarly, whenever a thread dies, the DllMain method of every DLL in the process is called with a DLL_THREAD_DETACH flag. Some DLLs need these notifications to perform special per-thread initialization or resource cleanup.
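The per-thread costs described above can be observed empirically. The following sketch (the thread count is arbitrary and the timing is illustrative, not a figure from the text) measures how long it takes to create, start, and join a batch of do-nothing threads:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

internal static class ThreadOverheadDemo {
    // Creates, starts, and joins 'threadCount' do-nothing threads and
    // returns the elapsed milliseconds, making the per-thread cost visible.
    public static Int64 MeasureThreadOverhead(Int32 threadCount) {
        Stopwatch sw = Stopwatch.StartNew();
        Thread[] threads = new Thread[threadCount];
        for (Int32 i = 0; i < threadCount; i++) {
            // The Thread object itself is cheap; the kernel object, TEB, and
            // user-mode/kernel-mode stacks are created when Start is called
            threads[i] = new Thread(() => { });
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join(); // Wait for every thread to die
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

    public static void Main() {
        Console.WriteLine("100 threads took {0} ms",
            MeasureThreadOverhead(100));
    }
}
```

The measured time varies by machine and OS, but it makes concrete the claim that threads are not free to create and destroy.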
Now you know the space and time overhead required to create a thread, let it sit in the system, and eventually destroy it. Next we discuss context switching. A single-CPU computer can do only one thing at a time, so Windows has to share the physical CPU among all the threads in the system. At any given moment, Windows assigns only one thread to a CPU. That thread is allowed to run for a "time slice". When the time slice expires, Windows context-switches to another thread. Every context switch requires Windows to perform the following actions:
  • Save the values in the CPU's registers to a context structure inside the currently running thread's kernel object.
  • Select one thread from the set of existing threads to schedule next (the thread to switch to). If this thread is owned by another process, Windows must also switch the virtual address space seen by the CPU before it executes any code or touches any data.
  • Load the values in the selected thread's context structure into the CPU's registers.
After the context switch is complete, the CPU executes the selected thread until its time slice expires, and then another context switch occurs. Windows performs context switches about every 30 ms. Context switches are pure overhead; that is, the cost of a context switch is not traded for any memory or performance gain. Windows performs context switching to provide users with a robust and responsive operating system. In fact, context switching can hurt performance more than you might think. When the CPU executes a thread, that thread's code and data tend to reside in the CPU's cache so that the CPU doesn't have to access the comparatively slow RAM. When Windows context-switches to a new thread, that thread is most likely executing different code and accessing different data that are not in the CPU's cache, so the CPU must access RAM to populate its cache before it can get back up to speed. But then, about 30 milliseconds later, another context switch occurs. Furthermore, when performing a garbage collection, the CLR must suspend all the threads, walk their stacks to find the roots so it can mark objects in the heap, walk their stacks again, and then resume all the threads. So reducing the number of threads also significantly improves the performance of the garbage collector. From this discussion, you might conclude that threads should be avoided as much as possible: they consume a lot of memory and require a fair amount of time to create, destroy, and manage, and Windows also wastes time performing context switches and garbage collections. But it is equally undeniable that threads make Windows more robust and responsive. It should also be noted that a computer with multiple CPUs can actually run several threads simultaneously, which improves the scalability of an application (doing more work in less time).
Windows can assign one thread to each CPU core, and each core performs its own context switching to other threads. Windows ensures that a single thread is never scheduled on more than one core at a time. If performance is the goal, the optimum number of runnable threads on any machine is the number of CPUs on that machine. As soon as there are more runnable threads than CPUs, thread context switching and its performance cost appear. In Windows, creating a process is very expensive: it typically takes several seconds, a lot of memory must be allocated and initialized, and the EXE and DLL files must be loaded from disk. By contrast, creating a thread in Windows is very cheap, so developers decided to stop creating processes and create threads instead. That is why we see so much multithreading today. However, threads are still more expensive than many other system resources, so they should be used sparingly. Admittedly, most threads in the system are created by native code, so the thread's user-mode stack only reserves address space and is most likely not fully committed to physical memory. But as more and more applications become managed, or host managed components, more and more stacks are fully committed, each actually allocating 1 MB of physical memory. And in any case, every thread still gets its kernel-mode stack and the other resources described earlier. The point is that even though threads are cheap to create compared to processes, they are not free, and the habit of creating them freely must stop. The CLR currently uses the threading capabilities of Windows. Although today a CLR thread corresponds directly to a Windows thread, the Microsoft CLR team reserves the right to divorce it from Windows threads in the future.
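The claim that the ideal number of runnable compute-bound threads equals the number of CPUs can be grounded against the machine at hand. A minimal sketch using Environment.ProcessorCount (this helper method name is my own, not from the text):

```csharp
using System;

internal static class CpuCountDemo {
    // The ideal number of runnable compute-bound threads is the number of
    // logical processors the OS exposes; any more than that guarantees
    // context switching among them.
    public static Int32 IdealComputeThreadCount() {
        return Environment.ProcessorCount;
    }

    public static void Main() {
        Console.WriteLine("Logical processors: {0}", IdealComputeThreadCount());
    }
}
```

A compute-bound work scheduler might use this value to size its worker set rather than spawning one thread per work item.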
Someday the CLR may introduce its own concept of logical threads, so that a CLR logical thread doesn't necessarily map to a physical Windows thread. A logical thread would use far fewer resources than a physical thread, allowing a great many logical threads to run on a very small number of physical threads. This section shows how to create a thread and have it perform an asynchronous compute-bound operation. Although it teaches you how to do this, it is strongly recommended that you avoid the technique shown here; instead, you should use the CLR's thread pool to execute asynchronous compute-bound operations, which is discussed later. However, if the code to be executed requires the thread to be in a particular state that is unusual for a thread-pool thread, you might consider creating a dedicated thread. For example, explicitly create your own thread if any of the following is true:
  • The thread needs to run at a non-normal thread priority. All thread pool threads run at normal priority. Although this priority can be changed, doing so is not recommended, and the priority change does not persist across thread pool operations.
  • The thread needs to be a foreground thread, preventing the application from dying until the thread has completed its task. All thread pool threads are background threads; if the CLR wants to terminate the process, they may be forced to stop before completing their task.
  • The compute-bound task is extremely long-running. The thread pool's logic for determining whether to create an additional thread is complicated; creating a dedicated thread for a long-running task avoids this problem entirely.
  • You want to start a thread and possibly terminate it early by calling Thread's Abort method.
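A dedicated thread combining several of the conditions above (non-normal priority, foreground status, long-running work) might be configured as in the sketch below. The method names are hypothetical, and Thread.Sleep stands in for real long-running work:

```csharp
using System;
using System.Threading;

internal static class DedicatedThreadDemo {
    // Configures and starts a dedicated thread for a long-running task.
    public static Thread StartLongRunningWorker() {
        Thread worker = new Thread(LongRunningWork);
        worker.IsBackground = false;                  // Foreground: keeps the process alive
        worker.Priority = ThreadPriority.BelowNormal; // Non-normal priority
        worker.Start();
        return worker;
    }

    private static void LongRunningWork() {
        // A long-running compute-bound task would go here; sleeping stands in for it
        Thread.Sleep(100);
    }
}
```

Setting IsBackground explicitly is redundant here (explicitly constructed threads default to foreground, as discussed later), but it documents the intent.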
To create a dedicated thread, you construct an instance of the System.Threading.Thread class, passing the name of a method into its constructor. Here is the prototype of Thread's constructor:
public sealed class Thread : CriticalFinalizerObject, ... {
   public Thread(ParameterizedThreadStart start);
   // Less commonly used constructors are not shown here
}

delegate void ParameterizedThreadStart(Object obj);
Constructing a Thread object is a lightweight operation because it does not actually create an operating system thread. To actually create the operating system thread and have it start executing the callback method, you must call Thread's Start method, passing into it the object (state) that you want passed as the callback method's argument. The following code demonstrates how to create a dedicated thread and have it call a method asynchronously:
internal static class FirstThread {
   public static void Go() {
      Console.WriteLine("Main thread: starting a dedicated thread " +
         "to do an asynchronous operation");
      Thread dedicatedThread = new Thread(ComputeBoundOp);
      dedicatedThread.Start(5);

      Console.WriteLine("Main thread: Doing other work here...");
      Thread.Sleep(10000);     // Simulate other work (10 seconds)

      dedicatedThread.Join();  // Wait for the thread to terminate
      Console.ReadLine();
   }

   // This method's signature must match the ParameterizedThreadStart delegate
   private static void ComputeBoundOp(Object state) {
      // This method is executed by a dedicated thread
      Console.WriteLine("In ComputeBoundOp: state={0}", state);
      Thread.Sleep(1000);  // Simulate other work (1 second)
      // When this method returns, the dedicated thread terminates
   }
}
When I compile and run this code on my machine, I get the following output:

   Main thread: starting a dedicated thread to do an asynchronous operation
   Main thread: Doing other work here...
   In ComputeBoundOp: state=5

Sometimes, however, running the code produces the following output instead, because I can't control how Windows schedules the two threads:

   Main thread: starting a dedicated thread to do an asynchronous operation
   In ComputeBoundOp: state=5
   Main thread: Doing other work here...

Notice that the Go method calls Join. The Join method causes the calling thread to stop executing any more code until the thread identified by dedicatedThread has destroyed itself or been terminated. There are three reasons to use threads:
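Join also has overloads that accept a timeout and return a Boolean indicating whether the thread actually terminated within that interval, which avoids blocking a caller indefinitely. A minimal sketch (the helper method and its timings are my own illustration):

```csharp
using System;
using System.Threading;

internal static class JoinTimeoutDemo {
    // Starts a short-lived worker and waits for it with a timeout.
    public static Boolean WaitForWorker(Int32 timeoutMs) {
        Thread worker = new Thread(() => Thread.Sleep(50)); // 50 ms of simulated work
        worker.Start();

        // Join(Int32) blocks until the thread dies or the timeout expires;
        // it returns true if the thread terminated within the interval
        return worker.Join(timeoutMs);
    }
}
```

With a generous timeout the call returns true; with a timeout shorter than the work, it returns false and the worker keeps running.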
  • You can use threads to isolate code from other code. This improves your application's reliability and is, in fact, the reason Windows introduced the thread concept into the operating system.
  • You can use threads to simplify coding. Sometimes coding is simpler if a task executes on its own thread. But note that introducing threads usually introduces code that requires the threads to cooperate: they may need synchronization constructs, or need to know when another thread terminates. Once cooperation is involved, more resources are required and the code becomes more complex. So before developing and using a thread, make sure the thread actually helps you.
  • You can use threads for concurrent execution. If you know your application is running on a machine with multiple CPUs, you can improve performance by performing multiple tasks at the same time.
A preemptive operating system must use an algorithm to determine which threads should be scheduled when and for how long. This section looks at the algorithms Windows uses. As mentioned earlier, every thread's kernel object contains a context structure, which reflects the state of the thread's CPU registers the last time the thread executed. After a time slice, Windows examines all the thread kernel objects currently in existence. Of these objects, only the threads that are not waiting for something are schedulable. Windows selects one of the schedulable thread kernel objects and context-switches to it. Windows keeps a record of how many times each thread gets context-switched to; you can see this data using a tool such as Microsoft Spy++. Windows is called a preemptive multithreaded operating system because a thread can be stopped (preempted) at any time so that another thread can be scheduled. As a result, you cannot guarantee that your thread will always be running, and you cannot prevent other threads from running. Every thread is assigned a priority level ranging from 0 (lowest) to 31 (highest). When the system decides which thread to assign to a CPU, it examines the priority-31 threads first and schedules them in a round-robin fashion. As long as any priority-31 thread is schedulable, the system never assigns any thread with a priority of 0 through 30 to a CPU. This condition is called starvation; it occurs when higher-priority threads use so much CPU time that they prevent lower-priority threads from executing. Starvation is much less likely on a multiprocessor machine, because there a priority-31 thread and a priority-30 thread can run simultaneously. The system always tries to keep the CPUs busy; a CPU sits idle only if no threads are schedulable. Higher-priority threads always preempt lower-priority threads, regardless of which lower-priority thread is running. When the system boots, it creates a special thread called the zero-page thread.
This thread has a priority of 0 and is the only thread in the entire system that runs at priority 0. The zero-page thread zeroes free pages of the system's RAM when no other threads need to execute. When designing an application, you should decide whether your application needs to be more or less responsive than other applications that might be running on the machine. Then you select a process priority class to reflect your decision. Windows supports six process priority classes: Idle, Below Normal, Normal, Above Normal, High, and Realtime. Normal is the default and is therefore the most common priority class. Note that priority classes and priority levels are two different concepts. By definition, a thread's priority level is determined by two criteria: 1) the priority class of its process, and 2) the thread's relative priority within that process priority class. The process priority class and the relative thread priority together make up a thread's base priority level. Note that each thread also has a dynamic priority level; the thread scheduler determines which thread to execute based on this dynamic priority. Initially, a thread's dynamic priority is the same as its base priority. The system can boost or lower the dynamic priority to ensure responsiveness and to keep threads from starving for processor time. However, the system never boosts the priority of threads with a base priority between 16 and 31; only threads with a base priority between 0 and 15 can have their dynamic priority boosted. The Idle priority class is appropriate for applications (such as screen savers) that run when the system is otherwise doing nothing. Applications that perform statistics tracking or analysis may need to periodically update some state, but should generally not impede more critical tasks; these are candidates for a lower priority class. The High priority class should be used only when absolutely necessary.
The Realtime priority class should be avoided entirely: Realtime priority is so high that it can interfere with operating system tasks, such as blocking required disk I/O and network traffic. After selecting a priority class, stop thinking about how your application relates to other applications and concentrate only on the threads within your application. Windows supports seven relative thread priorities: Idle, Lowest, Below Normal, Normal, Above Normal, Highest, and Time-Critical. These priorities are relative to the process's priority class. Again, Normal is the default relative thread priority and is therefore the most common. Notice that the priority levels 0 through 31 have not been mentioned here: application developers never work with priority levels directly, and there is no way to set a thread's priority level to a value from 0 to 31. Instead, the operating system maps the combination of "priority class" and "relative thread priority" to a priority level, and this mapping varies across versions of Windows.

Relative Thread   |               Process Priority Class
Priority          | Idle | Below Normal | Normal | Above Normal | High | Real-Time
------------------+------+--------------+--------+--------------+------+----------
Time-Critical     |  15  |      15      |   15   |      15      |  15  |    31
Highest           |   6  |       8      |   10   |      12      |  15  |    26
Above Normal      |   5  |       7      |    9   |      11      |  14  |    25
Normal            |   4  |       6      |    8   |      10      |  13  |    24
Below Normal      |   3  |       5      |    7   |       9      |  12  |    23
Lowest            |   2  |       4      |    6   |       8      |  11  |    22
Idle              |   1  |       1      |    1   |       1      |   1  |    16
Note that the table contains no priority level of 0. This is because priority 0 is reserved for the zero-page thread; no other thread is allowed to have a priority of 0. Also, the following priority levels are not obtainable: 17, 18, 19, 20, 21, 27, 28, 29, and 30. You can obtain these levels only if you write a device driver that runs in kernel mode. Note: the concept of a "process priority class" can cause some confusion. It might sound as though Windows schedules processes, but it does not; Windows only schedules threads. The process priority class is an abstraction Microsoft created to help you reason about how your application should relate to other running applications; it serves no other purpose. Tip: it is usually better to lower the priority of one thread than to raise the priority of another. In your application, you can change a thread's relative priority by setting Thread's Priority property, passing it one of the five values defined in the ThreadPriority enumerated type: Lowest, BelowNormal, Normal, AboveNormal, or Highest. The CLR reserves the Idle and Time-Critical priorities for itself. Finally, note that the System.Diagnostics namespace contains a Process class and a ProcessThread class; these classes provide the Windows views of a process and its threads, respectively. An application must run with special security permissions to use these classes; for example, you cannot use them in a Silverlight application or an ASP.NET application. On the other hand, applications can use the AppDomain and Thread classes, which expose the CLR's views of an AppDomain and a thread; generally, no special security permissions are required to use these classes, although some operations are still privileged.
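Setting a thread's relative priority, and reading the process priority class through System.Diagnostics (which, as noted above, requires sufficient security permissions), can be sketched as follows; the helper method is my own illustration:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

internal static class PriorityDemo {
    // Lowers the calling thread's relative priority and returns the new value.
    public static ThreadPriority LowerCurrentThreadPriority() {
        // One of the five ThreadPriority values available to managed code
        Thread.CurrentThread.Priority = ThreadPriority.BelowNormal;
        return Thread.CurrentThread.Priority;
    }

    public static void Main() {
        Console.WriteLine("Thread priority: {0}", LowerCurrentThreadPriority());

        // The Windows view of the process, including its priority class
        Process me = Process.GetCurrentProcess();
        Console.WriteLine("Process priority class: {0}", me.PriorityClass);
    }
}
```

Lowering a thread's own priority, as done here, follows the tip above of preferring to lower one thread rather than raise another.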
The CLR considers every thread to be either a foreground thread or a background thread. When all the foreground threads in a process stop running, the CLR forcibly terminates any background threads that are still running. These background threads are terminated directly; no exception is thrown. Therefore, you should use foreground threads to execute tasks that you really want to run to completion, such as flushing data from a memory cache to disk. Background threads should be used for noncritical tasks, such as recalculating spreadsheet cells or indexing records; such work can continue when the application restarts, and there is no need to keep the application alive for it if the user terminates the application. The reason the CLR provides the concepts of foreground and background threads is to better support AppDomains. Each AppDomain can run a separate application, and each of these applications has its own foreground threads. If one application exits, causing its foreground threads to terminate, the CLR still needs to stay up and running so that the other applications continue to run. Only after all the applications exit and all their foreground threads terminate can the whole process be destroyed.
public class Program {
   public static void Main() {
      // Create a new thread (defaults to foreground)
      Thread t = new Thread(Worker);

      // Make the thread a background thread
      t.IsBackground = true;

      t.Start(); // Start the thread

      // If t is a foreground thread, the application won't die for about 10 seconds
      // If t is a background thread, the application dies immediately
      Console.WriteLine("Returning from Main");
      Console.Read();
   }

   private static void Worker() {
      Thread.Sleep(10000);  // Simulate 10 seconds of work

      // The line below is displayed only if this code is executed by a foreground thread
      Console.WriteLine("Returning from Worker");
   }
}
A thread can be changed from foreground to background, and vice versa, at any time in its lifetime. An application's primary thread, and any thread explicitly created by constructing a Thread object, default to being foreground threads. Thread pool threads, on the other hand, default to being background threads, as does any thread created by native code that enters the managed execution environment. Tip: avoid using foreground threads as much as possible.
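These defaults can be verified with Thread's IsBackground property: an explicitly constructed thread starts as a foreground thread, while a thread pool thread reports IsBackground as true. A minimal sketch (the helper class and method names are my own):

```csharp
using System;
using System.Threading;

internal static class ForegroundBackgroundDemo {
    // A thread created by constructing a Thread object defaults to foreground
    public static Boolean IsExplicitThreadForeground() {
        Thread t = new Thread(() => { });
        return !t.IsBackground;
    }

    // A thread pool thread defaults to background
    public static Boolean IsPoolThreadBackground() {
        Boolean isBackground = false;
        using (ManualResetEvent done = new ManualResetEvent(false)) {
            ThreadPool.QueueUserWorkItem(_ => {
                // Inspect the pool thread executing this work item
                isBackground = Thread.CurrentThread.IsBackground;
                done.Set();
            });
            done.WaitOne(); // Wait for the work item to finish
        }
        return isBackground;
    }
}
```

Both methods return true, confirming the defaults described above.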
