.NET 4.0 Study Notes (4) -- Thread Basics (II)

Using a Dedicated Thread to Perform an Asynchronous Compute-Bound Operation

 

In this section, I will show you how to create a thread and use it to perform a compute-bound operation asynchronously. Before that, let me stress that you should avoid the technique shown here; instead, you should use the CLR thread pool to execute compute-bound operations asynchronously. I elaborate on this in Chapter 26, "Compute-Bound Asynchronous Operations."

However, in some cases you may need to explicitly create a thread to perform a particular compute-bound operation. Typically, you create a dedicated thread when your code must run on a thread in a special state, something a thread pool thread cannot give you. For example, you might create your own thread if any of the following is true:

    • You need the thread to run at a non-Normal priority. All thread pool threads run at Normal priority. You can change a thread pool thread's priority, but it is not recommended, and the change does not persist across thread pool operations. (A minimal sketch illustrating these settings appears after this list.)
    • You need the thread to be a foreground thread, preventing the application from terminating until the thread has completed its task. For more information, see the "Foreground Threads Versus Background Threads" section later in this chapter. Thread pool threads are always background threads, and they may not complete their task if the CLR decides to terminate the process.
    • The compute-bound task is extremely long-running; this way, you don't tax the thread pool's logic as it tries to figure out whether it needs to create an additional thread.
    • You want to start a thread and possibly abort it prematurely by calling Thread's Abort method (discussed in Chapter 22, "CLR Hosting and AppDomains").
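
For illustration, here is a minimal sketch of configuring a dedicated thread along these lines (my own example, not from the original text; the priority value and method names are just placeholders):

using System;
using System.Threading;

public static class DedicatedThreadDemo {
    public static void Main() {
        Thread dedicated = new Thread(LongRunningOp);

        // A dedicated thread can run at a non-Normal priority...
        dedicated.Priority = ThreadPriority.AboveNormal;

        // ...and it is a foreground thread by default, so the process
        // will not terminate until LongRunningOp completes
        dedicated.IsBackground = false;

        dedicated.Start("some state");
        dedicated.Join();   // Wait for the dedicated thread to finish
    }

    private static void LongRunningOp(Object state) {
        Console.WriteLine("Running with state: " + state);
        Thread.Sleep(3000); // Simulate a long-running compute-bound operation
    }
}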

To create a dedicated thread, you construct an instance of the System.Threading.Thread class, passing the name of the method to execute into its constructor. Here is the prototype of Thread's constructor:

 

 
public sealed class Thread : CriticalFinalizerObject, ... {
    public Thread(ParameterizedThreadStart start);
    // Less commonly used constructors are not shown here
}

The start parameter identifies the method that the dedicated thread will execute, and this method must match the signature of the ParameterizedThreadStart delegate. (The ThreadStart delegate is not recommended.)

 

 
delegate void ParameterizedThreadStart(Object obj);

Constructing a Thread object is a lightweight operation because it does not actually create a physical operating system thread. To create the operating system thread and have it execute the callback method, you must call Thread's Start method, passing into it the object (state) you want passed as the callback method's argument. The following code creates a dedicated thread and has it execute an operation asynchronously:

 

using System;
using System.Threading;

public static class Program {
    public static void Main() {
        Console.WriteLine("Main thread: starting a dedicated thread " +
            "to do an asynchronous operation");
        Thread dedicatedThread = new Thread(ComputeBoundOp);
        dedicatedThread.Start(5);

        Console.WriteLine("Main thread: doing other work here...");
        Thread.Sleep(10000);     // Simulating other work (10 seconds)

        dedicatedThread.Join();  // Wait for thread to terminate
        Console.WriteLine("Hit <Enter> to end this program...");
        Console.ReadLine();
    }

    // This method's signature must match the ParameterizedThreadStart delegate
    private static void ComputeBoundOp(Object state) {
        // This method is executed by a dedicated thread
        Console.WriteLine("In ComputeBoundOp: state={0}", state);
        Thread.Sleep(1000);  // Simulates other work (1 second)

        // When this method returns, the dedicated thread dies
    }
}

When I compile and run this code, I get the following output:


Main thread: starting a dedicated thread to do an asynchronous operation

Main thread: doing other work here...

In computeboundop: State = 5

Sometimes when I run this code, I get the following output instead, because I have no control over how Windows schedules the two threads:

Main thread: starting a dedicated thread to do an asynchronous operation

In computeboundop: State = 5

Main thread: doing other work here...

Notice that the Main method calls Join. The Join method causes the calling thread to stop executing any code until the thread identified by dedicatedThread has destroyed itself or been terminated.
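
As a side note, Join also has overloads that accept a timeout and return a Boolean indicating whether the thread terminated in time. Here is a minimal sketch (my own example; the 2-second timeout and the names used are purely illustrative):

using System;
using System.Threading;

public static class JoinTimeoutDemo {
    public static void Main() {
        Thread worker = new Thread(() => Thread.Sleep(5000)); // Simulate 5 seconds of work
        worker.Start();

        // Wait at most 2 seconds for the thread to terminate
        Boolean finished = worker.Join(2000);
        Console.WriteLine(finished
            ? "Worker finished within the timeout"
            : "Timeout expired; worker is still running");

        worker.Join(); // Now wait indefinitely for the worker to terminate
    }
}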

 

Reasons to Use Threads

 

There are three reasons for using threads:

    • You can use threads to isolate code. This can improve your application's reliability. In fact, this is why Windows introduced the concept of threads into the operating system: Windows needs threads for reliability, because your application is a third-party component as far as the operating system is concerned, and Microsoft cannot vouch for the quality of your code before it ships. You, however, can test your application before you release it, and because you test the whole application together, you know whether it is robust and of high quality. For this reason, your application does not need the same level of robustness as the operating system, so it usually does not need threads just to improve robustness. But if your application loads components developed by others, it does need to be more robust, and using threads can help meet that requirement.
    • You can use threads to simplify your code. Sometimes code is simpler when a task runs on its own thread. Of course, when you do this, you are using additional resources and your code is not as efficient as it could be. Now, I am quite willing to trade some resources for simpler code; if I were not, I would still be writing in machine language instead of C#. But sometimes I see people introduce threads thinking they are choosing an easier programming technique when, in fact, they are making their lives (and their code) more complicated. Usually, when you introduce a thread, you also need thread synchronization constructs to determine when the other thread has finished. Once you do that, you are using even more resources and your code is harder to maintain. So before you start using threads, make sure they are actually helping you.
    • You can use threads to process data concurrently. If you know your application is running on a multiprocessor machine, you can execute tasks concurrently to improve performance. Multiprocessor systems are very common nowadays, so it makes sense for your application to take advantage of them. For more information, see Chapters 26 and 27.

 

Now let me share my opinion with you. Every computer contains an incredibly powerful resource: the CPU itself. If someone spends money on a computer, that computer should be working all the time. In other words, I believe CPU usage should always be at 100%. I will qualify this statement with two caveats. First, if the computer is running on battery power, you don't want the CPU running at 100%, because that drains the battery quickly. Second, some data centers would rather have 10 machines running at 50% CPU utilization than 5 machines running at 100%, because CPUs running at full speed tend to produce a great deal of heat, which requires cooling systems such as HVAC (heating, ventilation, and air conditioning), and running those cooling systems can cost more than running additional computers. Even though data centers find it increasingly expensive to manage more machines, since each machine needs periodic hardware and software upgrades and monitoring, that cost can still be less than the cost of running the cooling systems.

Now, if you agree with me, the next step is to figure out what the CPU should be doing. Before I give you my ideas, let me point something out. In the past, developers and users always felt that the computer was not powerful enough. Therefore, we developers would never just execute code unless the end user gave us permission and indicated, through UI elements such as menu items, buttons, and check boxes, that it was OK to consume CPU resources.

But now everything has changed. Computers ship with enormous computing power, and even more will be available in the near future. At the beginning of this chapter, I showed you Task Manager reporting that my CPU usage was 0%. If my computer had four cores instead of two, Task Manager would report 0% even more of the time. When 80-core machines appear, they will look idle almost all the time. To someone buying a computer, it looks like they are spending more money on more powerful CPUs while the computer does less and less work.

This is why it is hard for computer manufacturers to sell multi-core machines to users: the software does not take advantage of the hardware, and users get no benefit from the extra CPUs. What I am saying is that we now have a great deal of computing power available, with more on the way, so developers can consume it aggressively. Yes, in the past we would never dream of having our applications perform computations unless we knew the end user wanted the results. But now that we have extra computing power, we can dream about it.

Here is an example: when you stop typing in the Visual Studio editor, Visual Studio automatically compiles your code. This makes developers incredibly productive because they can see warnings and errors in the source file as they type and fix them right away. In effect, the traditional edit-compile-debug cycle becomes an edit-debug cycle, because compilation happens all the time. You, as a user, don't notice this, because there is plenty of computing power available and the compiler's frequent runs rarely affect whatever else you are doing. In fact, I expect that in a future version of Visual Studio the Build menu will disappear completely, because building will be fully automatic. Not only does the application's UI get simpler, but the application proactively offers "answers" to the user, making the user more productive.

When we remove UI components such as menus, applications become simpler for users. There are fewer options and fewer concepts for them to read about and understand. This is what the multi-core revolution allows: we can remove these UI items, making the software simpler for users, so that one day even my grandmother might feel comfortable using a computer. For developers, removing UI items usually means less testing, and offering fewer options means simpler underlying code. And if you localize the text of your UI items and your documentation (as Microsoft does), removing UI items means there is less documentation to write and less to localize. All of this saves your organization time and money.

Here are some modest ways to consume CPU time: spell checking and grammar checking, recalculating spreadsheets, indexing the files on your disk to speed up searches, and defragmenting your hard disk to improve I/O performance.

I want to live in a world where the UI shrinks and becomes simpler, where I have more screen area to display the data I am working on, and where applications offer me information that helps me get my work done quickly and efficiently, instead of waiting for me to tell them to go get that information. The hardware has been available to developers for a few years now; it is time for software to use the hardware creatively.

 

Thread Scheduling and Priorities

 

A preemptive operating system must use some algorithm to determine which threads should be scheduled and for how long. In this section, let's look at the algorithms Windows uses. Earlier in this chapter, I mentioned that every thread's kernel object contains a context structure, which reflects the state of the thread's CPU registers when the thread last executed. After a time slice, Windows examines all of the thread kernel objects currently in existence. Of these objects, only the threads that are not waiting for something are schedulable. Windows selects one of the schedulable thread kernel objects and context-switches to it. Windows actually keeps a count of how many times each thread has been context-switched to; you can see this count using a tool such as Microsoft Spy++. Figure 25-5 shows the properties of one thread; notice that this thread has been scheduled 32,768 times.

At this point, the thread is executing code and manipulating data in its process's address space. After another time slice, Windows performs another context switch. Windows performs context switches from the moment the system boots until it shuts down.

Windows is called a preemptive multithreaded operating system because a thread can be stopped at any time so that another thread can be scheduled. As you will see, you have some control over this, but not much. Just remember that you can never guarantee that your thread will keep running and that no other thread will be allowed to run.

Every thread is assigned a priority level from 0 (lowest) to 31 (highest). When the system decides which thread to assign to a CPU, it examines the priority 31 threads first and schedules them in a round-robin fashion. If a priority 31 thread is schedulable, it is assigned to a CPU. At the end of that thread's time slice, the system checks whether there is another priority 31 thread that can run; if so, it assigns that thread to a CPU.

As long as priority 31 threads are schedulable, the system never assigns a thread with a priority of 0 through 30 to a CPU. This condition is called starvation; it occurs when higher-priority threads use so much CPU time that lower-priority threads never get to execute. Starvation is much less likely on a multiprocessor machine, because there a priority 31 thread and a priority 30 thread can run simultaneously. The system always tries to keep the CPUs busy; a CPU sits idle only if no threads are schedulable.

Higher-priority threads always preempt lower-priority threads, regardless of what the lower-priority threads are doing. For example, if a priority 5 thread is running and the system determines that a higher-priority thread is ready to run, the system immediately suspends the lower-priority thread (even if it is in the middle of its time slice) and assigns the CPU to the higher-priority thread, which gets a full time slice.

By the way, when the system boots, it creates a special thread called the zero page thread. This thread is assigned priority 0, and it is the only thread in the entire system that runs at priority 0. The zero page thread is responsible for zeroing any free pages of RAM when no other threads need to perform work.

Microsoft realized that assigning priority levels to threads would be too hard for developers to reason about. Should this thread be priority 10? Should that other thread be priority 23? To resolve this issue, Windows exposes an abstraction layer over the priority level system.

When designing an application, you should decide whether it needs to be more or less responsive than the other applications that might be running on the machine. You then choose a process priority class to reflect your decision. Windows supports six process priority classes: Idle, Below Normal, Normal, Above Normal, High, and Realtime. Of course, Normal is the default, so it is the most common priority class.

The Idle priority class is perfect for applications (such as screen savers) that run only when the system has nothing else to do. A computer that is not being used interactively may still be busy, and it should not have to compete with a screen saver for CPU time. Statistics-tracking applications that periodically update some state about the system generally should not interfere with more critical tasks either.

Use the High priority class only when absolutely necessary, and avoid the Realtime priority class if at all possible. Realtime priority is extremely high and can interfere with operating system tasks, such as blocking I/O requests and network traffic. In addition, the threads of a Realtime process can prevent keyboard and mouse input from being processed in a timely manner, making the user think the system is frozen. Basically, you should have a very good reason to use Realtime priority, such as the need to respond to hardware events with very low latency or to perform a short-lived task.

Once you select a priority class, you should stop thinking about how your application relates to other applications and concentrate only on the threads within your application. Windows supports seven relative thread priorities: Idle, Lowest, Below Normal, Normal, Above Normal, Highest, and Time-Critical. These priorities are relative to the process's priority class. Again, Normal is the default, so it is the most common.

So, to summarize, your process belongs to a priority class, and you assign relative priorities to the threads within it. You will notice that I haven't said anything about priority levels 0 through 31. Developers never work with priority levels directly. Instead, the system maps the process's priority class and a thread's relative priority to a priority level. Table 25-1 shows how process priority classes and relative thread priorities map to priority levels.

For example, a Normal thread in a Normal process is assigned a priority level of 8. Because most processes are of the Normal priority class and most threads have Normal relative priority, most threads in the system have a priority level of 8.

If you have a Normal thread in a High-priority process, that thread has a priority level of 13. If you change the process's priority class to Idle, the thread's priority level becomes 4. Remember, thread priorities are relative to the process's priority class. If you change a process's priority class, the threads' relative priorities do not change, but their priority levels do.

Note that the table does not show any thread with a priority level of 0. This is because priority level 0 is reserved for the zero page thread; the system does not allow any other thread to have a priority level of 0. Moreover, the following priority levels are not obtainable: 17, 18, 19, 20, 21, 27, 28, 29, and 30. These levels can be obtained only by device drivers running in kernel mode; a user-mode application cannot obtain them. Also note that a thread in a Realtime process cannot have a priority level below 16, and a thread in a non-Realtime process cannot have a priority level above 15.

In general, a process's priority class is determined by the process that starts it. Most processes are started by Windows Explorer, which spawns its child processes with the Normal priority class. Managed applications are not supposed to act as though they own their own process; they are supposed to assume that they could be running in an AppDomain, so managed applications should not change the priority class of their process, because doing so would affect all of the code running in the process. For example, many ASP.NET applications run in a single process, with each application in its own AppDomain. The same is true for Silverlight applications, which run inside an Internet browser process, and for managed stored procedures, which run inside the Microsoft SQL Server process.

On the other hand, your application can change the relative priority of its threads by setting Thread's Priority property, passing it one of the five values defined in the ThreadPriority enumeration: Lowest, BelowNormal, Normal, AboveNormal, or Highest. However, just as Windows reserves priority level 0 and the real-time range for itself, the CLR reserves the Idle and Time-Critical relative priorities for itself. Today, no threads in the CLR run at Idle priority, but this could change in the future. However, the CLR's finalizer thread, discussed in Chapter 21, "Automatic Memory Management," runs at Time-Critical priority. Therefore, as a managed developer, you really get to use only the five relative thread priorities listed in Table 25-1.
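
As a quick illustration (a minimal sketch of my own, not code from the original text), changing a dedicated thread's relative priority might look like this:

using System;
using System.Threading;

public static class PriorityDemo {
    public static void Main() {
        Thread worker = new Thread(() => {
            Console.WriteLine("Worker priority: " + Thread.CurrentThread.Priority);
            Thread.Sleep(1000); // Simulate some work
        });

        // Lower the thread's relative priority before starting it
        worker.Priority = ThreadPriority.BelowNormal;

        worker.Start();
        worker.Join();
    }
}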

I should point out that the System.Diagnostics namespace contains a Process class and a ProcessThread class. These classes provide the Windows view of a process and its threads, and they are provided for developers writing diagnostics code in managed languages; in fact, that is why they live in the System.Diagnostics namespace. Applications need special security permissions to use these two classes. For example, you cannot use them from a Silverlight application or an ASP.NET application.
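
For example, assuming the code runs with sufficient permissions (say, as an ordinary desktop console application), a hedged sketch of inspecting the Windows view of the current process and its threads might look like this:

using System;
using System.Diagnostics;

public static class DiagnosticsDemo {
    public static void Main() {
        Process current = Process.GetCurrentProcess();

        // The Windows priority class of this process (Normal by default)
        Console.WriteLine("Process priority class: " + current.PriorityClass);

        // Enumerate the Windows view of this process's threads
        foreach (ProcessThread pt in current.Threads) {
            // CurrentPriority is the 0-31 priority level Windows assigned to the thread
            Console.WriteLine("Thread Id={0}, priority level={1}", pt.Id, pt.CurrentPriority);
        }
    }
}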

On the other hand, applications can use the AppDomain and Thread classes, which expose the CLR view of an AppDomain and a thread. In most cases, no special security permissions are required to use these classes, although some of their operations do require elevated permissions.

 

Foreground Threads Versus Background Threads

 

The CLR considers every thread to be either a foreground thread or a background thread. When all of a process's foreground threads terminate, the CLR forcibly ends any background threads that are still running. These background threads are terminated immediately; no exception is thrown.

Therefore, you should use foreground threads to execute tasks that you really want to finish, such as flushing data from a memory buffer out to disk. You should use background threads for non-critical tasks, such as recalculating spreadsheet cells or indexing records, because this work can pick up where it left off when the application restarts, and there is no need to keep the application alive if the user wants to terminate it.

The CLR provides the concepts of foreground and background threads to better support AppDomains. As you know, each AppDomain can run a separate application, and each of these applications has its own foreground thread. If one of these applications exits and its foreground thread terminates, the CLR still needs to keep running so that the other applications can continue to execute. When all of the applications exit and all of their foreground threads terminate, the whole process is destroyed.

The following code demonstrates the differences between foreground and background threads:

 

using System;
using System.Threading;

public static class Program {
    public static void Main() {
        // Create a new thread (defaults to foreground)
        Thread t = new Thread(Worker);

        // Make the thread a background thread
        t.IsBackground = true;

        t.Start(); // Start the thread

        // If t is a foreground thread, the application won't die for about 10 seconds
        // If t is a background thread, the application dies immediately
        Console.WriteLine("Returning from Main");
    }

    private static void Worker() {
        Thread.Sleep(10000); // Simulate doing 10 seconds of work

        // The line below only gets displayed if this code is executed by a foreground thread
        Console.WriteLine("Returning from Worker");
    }
}

 

A thread can change from foreground to background (and back) at any time during its lifetime. An application's primary thread, and any thread explicitly created by constructing a Thread object, default to being foreground threads. Thread pool threads, on the other hand, default to being background threads. Also, any thread created by native code that enters the managed execution environment is marked as a background thread.
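
To see these defaults for yourself, here is a small sketch (my own example, not from the original text) that checks the IsBackground property on the primary thread and on a thread pool thread:

using System;
using System.Threading;

public static class BackgroundDefaultsDemo {
    public static void Main() {
        // The primary thread is a foreground thread by default
        Console.WriteLine("Main thread IsBackground = " +
            Thread.CurrentThread.IsBackground);      // False

        // Thread pool threads are background threads by default
        ThreadPool.QueueUserWorkItem(_ =>
            Console.WriteLine("Thread pool thread IsBackground = " +
                Thread.CurrentThread.IsBackground)); // True

        Thread.Sleep(1000); // Give the thread pool thread time to run before Main returns
    }
}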

 

What about now?

 

In this chapter, I have explained the basics of threads, and I hope I have made it clear that threads are very expensive resources that should be used sparingly. The best way to do that is to use the CLR's thread pool. The thread pool automatically manages the creation and destruction of threads for you, and the threads it creates are reused for different tasks, so your application can get its work done with just a few threads.

In Chapter 26, I will focus on how to use CLR thread pool threads to perform compute-bound operations. In Chapter 27, I will discuss how the thread pool is used with the CLR's Asynchronous Programming Model to perform I/O-bound operations.

......
