Reading Notes on the Programmer's C# Thread Reference Manual (Multi-threading Technical Analysis)


Over the past few days I took some time to read through the C# thread reference manual; it is useful for beginners...

The book can be obtained from my CSDN download channel; please buy the original book to support the genuine edition (http://lzhdim.download.csdn.net/).

A few years ago I bought "Multi-core Program Design Technology", a book written by an Intel engineer. I had originally planned to write an introduction to multi-core program design based on it, but for lack of time the topic was changed to parallel programming, and the important content of that book never got written up. It is a pity; I will make up for it later. (There are plenty of parallel programs around now.)

In fact, Intel has organized many parallel-programming activities and topics to promote its multi-core CPUs and to pave the way for the design and development of parallel programs. After all, Intel is an old partner of Microsoft; between the two it has always been a game of using software to drive the development of hardware, and of rolling out new hardware lines to support software upgrades.

In fact, I think the move to multiple CPU cores has been rather slow. Many years ago, DSP hardware already supported parallel processing, and plenty of chip series, development boards, and so on were in full swing for those applications (at that time the CPU was still single-core; a server that needed several CPUs simply had several CPU sockets on the motherboard). The computer CPU, by contrast, developed slowly. One reason is the limits of hardware technology (which has in fact advanced very quickly; Intel has kept upgrading hardware roughly along Moore's Law), mainly the race at the nanometre level. Another is price: after a new technology succeeds in the laboratory, it takes time to put it into production. Another is the transition from the previous CPU series: manufacturers need time to promote and sell the matching computer products, such as motherboards and memory (updating a product line is a big undertaking). And another important factor is operating system support: the operating system has to make better use of the hardware's capabilities and extract more value from each new hardware upgrade. The value of the operating system lies not only in working with the hardware but also in providing a better user experience. (At the moment GPUs are developing very fast and stealing the CPU's limelight. The original job of the CPU was computation, yet now the GPU's computing power is being applied everywhere, which is rather ironic for the CPU.)

Enough rambling; on to the subject...

I. Speaking of threads, let us start with the hardware, the CPU. Early single-core CPU technology such as hyper-threading essentially maps one physical core to a second, logical (not physical) core that shares the CPU cache, simulating a multi-core setup through software allocation and scheduling (the hardware layer needs support from the software above it: above the chip, the code that handles the data is the operating system, and the operating system reaches the hardware through device drivers). A CPU that supports hyper-threading shows two or more CPU usage graphs in the Windows Task Manager, but physically it still has only half that many real cores (for current CPUs, a 4-core part with hyper-threading shows eight CPUs, and so on). When Intel first launched hyper-threading it looked quite good, but Intel at one point stopped using the technology because of early problems with hardware design, drivers, and operating system support; later, once the technology had matured, it was applied to CPUs again. Still, real physical cores are what count.

The CPU does not know what a thread is; it is only responsible for processing data. The early bus technologies offered relatively little bandwidth, and as hardware developed this limited how fast the CPU could move data, so the newer QPI technology appeared and bandwidth increased. Of course, at the moment it is only supported on boards such as X58 and P55, and new technology is always expensive. As far as the CPU is concerned, it only understands binary instructions (RISC and CISC instruction sets) and binary data, and the width of the data (32-bit or 64-bit) determines how much data the CPU processes at a time. Chip-level code and algorithms control and schedule which idle CPU core handles data in parallel. The operating system therefore only needs to call the driver provided by the hardware CPU vendor and control the thread queues; in essence it acts as an intermediary.

II. The scheduling path of a thread is roughly: user application -> operating system -> HAL -> driver -> motherboard northbridge chipset (the P55 has only a southbridge) -> motherboard bus -> CPU core scheduling algorithm -> CPU instruction set -> CPU cache -> CPU core. (This ordering is my own understanding of the hardware; if you see it differently, please point it out.)

Parallel programs we write in C# are managed by the CLR, while the threads in those programs are managed by the operating system. The .NET Framework already provides a set of methods for working with threads, taking into account issues such as thread creation, updating, communication, synchronization, data locking, and asynchronous operation. Therefore, unless it is really necessary to do otherwise, try to use the methods provided by the Framework to operate threads, to get better performance, efficiency, and control.

1. Understand the life cycle of a thread.

The point is mainly to understand the state changes of a thread, so as to understand the thread's running mechanism later and to have a basis for controlling threads in code.

The figure in the original post (not reproduced here) describes the operations and states of threads in C#. Keep that diagram in mind; it makes these methods easy to apply later.
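As a small illustration of those state changes, here is a minimal sketch (the exact states printed can vary with timing):

using System;
using System.Threading;

class ThreadStateDemo
{
    static void Main()
    {
        Thread t = new Thread(() => Thread.Sleep(200));
        Console.WriteLine(t.ThreadState);   // Unstarted
        t.Start();
        Console.WriteLine(t.ThreadState);   // Running (or WaitSleepJoin once it sleeps)
        t.Join();
        Console.WriteLine(t.ThreadState);   // Stopped
    }
}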

2. Understand the thread environment.

To use the thread operations provided by C#, you must first understand the environment a thread runs in. The figure in the original post (not reproduced here) shows the basic environment.

Briefly, the environment is this: the CLR runs on top of the operating system, and the managed application's process runs under the control of the CLR. Application domains correspond to assemblies, and each domain may contain no threads, one thread, or many threads.
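A small sketch that prints which application domain and managed thread a piece of code is running on (illustrative only):

using System;
using System.Threading;

class EnvironmentDemo
{
    static void Main()
    {
        // Which application domain and managed thread this code runs on
        Console.WriteLine("AppDomain: " + AppDomain.CurrentDomain.FriendlyName);
        Console.WriteLine("Managed thread id: " + Thread.CurrentThread.ManagedThreadId);
        Console.WriteLine("Thread-pool thread: " + Thread.CurrentThread.IsThreadPoolThread);
    }
}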

3. Call the thread.

Starting a thread in C# is very simple: Thread t = new Thread(new ThreadStart(Function)); t.Start();. Recycling of the thread object afterwards is also handled by the GC.
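Put together as a runnable sketch (the method name Work is just a placeholder):

using System;
using System.Threading;

class ThreadDemo
{
    static void Work()
    {
        Console.WriteLine("Working on thread " + Thread.CurrentThread.ManagedThreadId);
    }

    static void Main()
    {
        Thread t = new Thread(new ThreadStart(Work));
        t.Start();   // schedule the new thread for execution
        t.Join();    // wait for it to finish before the process exits
    }
}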

Thread priority is just as important, and of course it cannot be set arbitrarily: too many high-priority threads will grab CPU resources and drag down the performance of the whole operating system. Special attention should also be paid to thread synchronization and thread safety; otherwise resource contention and deadlocks can occur.
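For example, a background worker can be given a lower priority like this (a minimal sketch; BackgroundWork is just a placeholder name):

using System;
using System.Threading;

class PriorityDemo
{
    static void BackgroundWork()
    {
        for (int i = 0; i < 5; i++)
            Console.WriteLine("background step " + i);
    }

    static void Main()
    {
        Thread worker = new Thread(BackgroundWork);
        // Lower the priority so this thread does not compete aggressively for CPU time.
        worker.Priority = ThreadPriority.BelowNormal;
        worker.Start();
        worker.Join();
    }
}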

For thread synchronization, the .NET Framework provides several classes for processing and control. They need to be understood in depth and chosen carefully in order to improve application performance.

For example, locking shared resources and important code segments is often implemented using the following code:

lock (obj)
{
    // Do something
    // Deal with obj
}

This is the common approach. The IL generated is essentially the same as for the following code, so at the IL level there is little difference:

Monitor.Enter(obj);
// do something
// deal with obj
Monitor.Exit(obj);
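Strictly speaking, the compiler also wraps the body of a lock block in a try/finally so that Monitor.Exit always runs, even if an exception is thrown. Written out by hand, the equivalent is roughly:

object obj = new object();

Monitor.Enter(obj);
try
{
    // do something with the shared state
}
finally
{
    Monitor.Exit(obj);   // released even when an exception occurs
}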

There is a small issue here: Monitor.Enter(object) makes the thread wait when there is contention for the resource object (and a deadlock may even occur), so it is suitable when the thread's work genuinely has to wait until the resource is free. If instead the thread is just scheduling work or monitoring a resource, use bool b = Monitor.TryEnter(object); if the lock is not obtained, b is false, and you can branch on b to decide whether to run the data-processing code or simply end the thread and let a later thread access the resource, instead of blocking until the resource is released. Which approach to choose depends on the analysis and design of the actual situation.
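A minimal sketch of that TryEnter pattern, assuming obj is the shared object from the snippets above and a 50 ms timeout chosen purely for illustration:

bool acquired = false;
try
{
    // Try for up to 50 ms instead of blocking indefinitely.
    acquired = Monitor.TryEnter(obj, TimeSpan.FromMilliseconds(50));
    if (acquired)
    {
        // process the shared data
    }
    else
    {
        // resource busy: skip this round and let another thread handle it later
    }
}
finally
{
    if (acquired)
        Monitor.Exit(obj);
}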

4. Thread Pool technology.

The thread pool improves the efficiency of multi-threaded programs by saving the time needed to create new threads, turning thread use into a scheduling problem over a pool of thread resources. Of course, the thread pool is not a cure-all: it is mainly suited to short-lived work items, not to large, long-running jobs. For thread-pool work you can directly use the ThreadPool class in the .NET Framework; its built-in handling cooperates with the operating system and is an efficient way to use a thread pool.
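For example, queuing a few short work items onto the framework's thread pool might look like this (a minimal sketch; CountdownEvent requires .NET 4.0 or later):

using System;
using System.Threading;

class ThreadPoolDemo
{
    static void Main()
    {
        using (CountdownEvent done = new CountdownEvent(3))
        {
            for (int i = 0; i < 3; i++)
            {
                int taskId = i;   // capture the loop variable for the closure
                ThreadPool.QueueUserWorkItem(state =>
                {
                    Console.WriteLine("Task {0} on pool thread {1}",
                        taskId, Thread.CurrentThread.ManagedThreadId);
                    done.Signal();
                });
            }
            done.Wait();   // wait until all three short work items have finished
        }
    }
}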

In special cases where you need to build a thread pool yourself (or a similar object pool for some other kind of object), I would recommend using the HashSet generic class rather than an array for storage. The book uses an ArrayList for this. A thread pool is usually fixed in size, so an array works, and ArrayList is itself a variable-length array. In terms of storage, arrays live on the managed heap in a contiguous region of memory, which is both a characteristic and an advantage. HashSet stores items by hashing, and adding or removing items is more efficient than with a fixed array: if an element is removed from the middle of an array, the following elements have to be moved up to fill the hole, which hurts efficiency. You can write some small demos to judge the performance yourself.

1. Let me give an analogy. The thread pool is like the production lines in a factory: when a product needs to be made, an idle production line is taken to do the work; when the job is finished, the line becomes idle again and waits for the next call. If there is no idle line, the task either waits to be processed later, or a new production line is added to handle it. If even that is not possible, a running line is suspended according to task priority, giving precedence to the task that must be handled now...
2. After a thread in the pool has been used, its resources are not released; it is simply made idle again. I did not describe this "increase or decrease" clearly before; it was a question of implementation method that caused the misunderstanding. More on this below.
3. How is a thread pool implemented?
If you use a fixed-length array, you need to loop over the array to find an idle, available thread, and when several requests compete for an idle thread you also need to lock that thread resource to keep things thread-safe, and so on... That is one implementation method.
Another implementation is described in the book: use one array to store the threads currently in use and another to store the idle threads. To request a thread, take one directly from the idle array and move it into the in-use array; when its task is finished, move it back to the idle array. This is what I meant by the thread count "increasing or decreasing"... That is the other implementation method (a rough sketch follows below).
As for which of the two is better, the trade-off between efficiency and performance depends on how you apply them.
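Below is a toy sketch of that second, two-collection approach. Everything here (SimpleWorkerPool, Worker, and so on) is an illustrative name of my own, not the book's code, and a real pool would of course hold actual threads and block or grow when none are idle:

using System;
using System.Collections.Generic;

// Stand-in for a pooled thread; Run just executes the given work item.
class Worker
{
    public int Id { get; private set; }
    public Worker(int id) { Id = id; }
    public void Run(Action work) { work(); }
}

class SimpleWorkerPool
{
    private readonly object _sync = new object();
    private readonly Queue<Worker> _idle = new Queue<Worker>();
    private readonly HashSet<Worker> _inUse = new HashSet<Worker>();

    public SimpleWorkerPool(int size)
    {
        for (int i = 0; i < size; i++)
            _idle.Enqueue(new Worker(i));
    }

    // Move a worker from the idle collection to the in-use collection.
    public Worker Acquire()
    {
        lock (_sync)
        {
            if (_idle.Count == 0)
                return null;               // caller must wait or retry
            Worker w = _idle.Dequeue();
            _inUse.Add(w);
            return w;
        }
    }

    // Return a finished worker to the idle collection.
    public void Release(Worker w)
    {
        lock (_sync)
        {
            if (_inUse.Remove(w))
                _idle.Enqueue(w);
        }
    }
}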

5. Multi-threaded program debugging.

Visual Studio provides tools that make debugging multi-threaded programs much easier; for details, see chapter 1 of the book.

 

I have not thoroughly explained the concepts mentioned above. Please read the book carefully.

Below are a few small notes:

1. CPU hardware speed depends mainly on clock frequency. Compare a higher-clocked single-core CPU with a 2 GHz dual-core CPU: a single-threaded program runs faster on the higher-clocked single core, but in a multi-threaded environment the 2 GHz dual-core may well run faster than the single core. This is mainly because a multi-threaded program makes the CPU switch threads frequently, and a multi-core CPU handles that faster than a single-core one. The newer Core i5 quad-core CPUs have also been optimized in hardware: when running a single-threaded program they raise the running frequency to 3.0 GHz or thereabouts and shut down the other physical cores to improve speed; when running a multi-threaded program they spread CPU resources across the cores according to the scheduling algorithm to improve the program's efficiency...

2. When writing multi-threaded programs, use as few threads as possible, so as to reduce the time the CPU spends scheduling and switching between threads.

3. Besides the thread-handling methods provided by the .NET Framework, there are third-party solutions. Intel, for example, provides components that offer such support; for details, refer to the book on multi-core programming technology.

4. ASP.NET programs are themselves multi-threaded, so if you can, it is worth studying the underlying details of this: how the .NET Framework handles it, and how to improve efficiency.

5. Read other books on C# thread operations, or find some online game code written in C# for reference; these are typical applications of multi-threading technology.

 

Time has flown, and the weekend is already here. Have a good weekend. As for the rest: time to relax and play...
