Introduction to Multithreading and Parallel Computing under .NET (I): Preface


As an ASP.NET developer, I have not had many opportunities to work with multithreaded programming in my previous projects. With the release of .NET 4.0 approaching, I feel that parallel computing will become very useful over the next one or two years. So I decided to write a series of posts summarizing multithreaded programming under .NET 3.5, and then introduce the new parallel library provided by .NET 4.0, together with the new parallel programming model and way of thinking it brings.

I suspect that few ASP.NET programmers use multithreading explicitly in day-to-day work, yet we constantly enjoy its benefits. First, the web server is a multithreaded environment: each request is handled on its own thread. Without multithreading, it is hard to imagine a web server that could only process one request at a time, synchronously. Likewise, the database server is a multithreaded environment. Programmers of Windows desktop applications can hardly avoid multithreading: the simplest case is starting a new thread for a time-consuming operation so that the UI does not stop responding, then applying the result to a control on the main thread when the operation completes. Such an application is multithreaded, and many programmers habitually open a new thread for any long-running operation, but I think this style of multithreading still belongs to the single-core era. In the multi-core era we can make tasks execute truly in parallel rather than merely appearing to.
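
As a small, hedged illustration of that pattern (a WinForms sketch with made-up control and method names, not code from the original article): a background thread performs the slow work, and the result is marshalled back to the UI thread with Control.Invoke, so the control is only touched on the thread that created it.

    using System;
    using System.Threading;
    using System.Windows.Forms;

    public class MainForm : Form
    {
        private readonly Label resultLabel = new Label { Dock = DockStyle.Fill };

        public MainForm()
        {
            Controls.Add(resultLabel);
            // Run the time-consuming work on a background thread so the UI keeps responding.
            var worker = new Thread(DoWork) { IsBackground = true };
            worker.Start();
        }

        private void DoWork()
        {
            Thread.Sleep(3000);                    // stand-in for a slow operation
            string result = "operation finished";
            // Controls may only be touched on the UI thread, so marshal the update back.
            resultLabel.Invoke(new Action(() => resultLabel.Text = result));
        }

        [STAThread]
        public static void Main()
        {
            Application.Run(new MainForm());
        }
    }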

First, some concepts. There is no need to dwell on the basics of processes and threads; it is enough to know that a process contains at least one thread. By running multiple threads in a process, a program can appear to do several things at once, for example performing a calculation while still responding to user input. In the past, a processor usually had only one core, which means that at any given instant it could do only one thing. So how were multiple threads "executed at the same time"? In reality, the threads take turns occupying time slices of the processor, each using its resources in turn. Because each time slice is very short, over any longer period it looks as if the threads were running simultaneously.
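
A minimal sketch of this time-slicing (a console example with hypothetical names): two threads print interleaved output, and the exact order is decided by the scheduler, not by the program.

    using System;
    using System.Threading;

    class TimeSliceDemo
    {
        // Matches ParameterizedThreadStart, so the thread name can be passed to Start().
        static void Print(object name)
        {
            for (int i = 0; i < 5; i++)
            {
                Console.WriteLine("{0}: {1}", name, i);
                Thread.Sleep(10);   // give the scheduler an obvious chance to switch
            }
        }

        static void Main()
        {
            var a = new Thread(Print);
            var b = new Thread(Print);
            a.Start("A");
            b.Start("B");
            a.Join();
            b.Join();
            // Even on a single core the lines from A and B interleave, because each
            // thread only gets short time slices in turn.
        }
    }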

For a vivid example: we sometimes see painters who appear to draw two different pictures on one canvas at the same time, say a person and a house, and finish both together. Look closely, though, and you will find the painter holds a brush in each hand, one stroke here and one stroke there. The painter is "single-core" like the rest of us, just faster at switching. I sometimes chat with friends online while talking to someone else on the phone, doing two things at once. But it is tiring: before typing I have to recall where the chat left off, type my reply, then think back to what the person on the phone just said and answer them. That recalling is the work of preparing the thread context and handing it to the brain for processing. Although I am doing two things at once, preparing the context each time wastes effort, and if I tried a third task at the same time, say watching a movie, I doubt it would work at all. So the human brain cannot open many threads. A computer processor is different: as long as the data and instructions are prepared, it does not care how many tasks they belong to, and it is perfectly happy executing instructions 24 hours a day, or sitting idle.

You may wonder: if thread switching costs time, wouldn't it be faster to run two tasks one after the other instead of on two threads? Not necessarily, even on a single-core processor, because in real applications a task rarely occupies the processor from start to finish; much of the time it is waiting for I/O or for user input, and with only one thread the processor would sit idle during those waits. On a multi-core processor, instructions really can execute concurrently on each core, so we need multiple threads to increase computing speed. Of course, this does not mean that a task which takes 10 seconds will finish in 5 seconds when run "in parallel" on a dual-core machine; the task may be hard to divide into two branches that can run in parallel. If every instruction depends on the result of the previous one, the work is difficult to spread across multiple processors. But we can think of it this way: if there are at least two such tasks, we can still make full use of multiple processors by running the tasks themselves in parallel.
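
To make the last point concrete, here is a rough sketch (the workloads are invented busy loops and the measured times will vary by machine): two independent tasks run on two threads, and on a multi-core machine the total elapsed time should be close to the time of one task rather than the sum of both.

    using System;
    using System.Diagnostics;
    using System.Threading;

    class TwoIndependentTasks
    {
        // Two CPU-bound tasks that do not depend on each other's results.
        static void TaskA() { for (long i = 0; i < 300000000; i++) { } }
        static void TaskB() { for (long i = 0; i < 300000000; i++) { } }

        static void Main()
        {
            var watch = Stopwatch.StartNew();
            var t1 = new Thread(TaskA);
            var t2 = new Thread(TaskB);
            t1.Start();
            t2.Start();
            t1.Join();
            t2.Join();
            watch.Stop();
            // On two or more cores this is roughly the time of one task;
            // on a single core it will not be faster than running them in sequence.
            Console.WriteLine("Elapsed: {0} ms", watch.ElapsedMilliseconds);
        }
    }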

However, threads cannot be opened at will. By default each thread reserves 1 MB of stack space (for ordinary applications), and on a 32-bit Windows platform a user-mode process can address at most 2 GB, so the number of threads in a program cannot exceed roughly 2000. In actual tests you will find that an out-of-memory exception is thrown after about 1930 threads have been created. In practice this number is more than sufficient; even a complex program such as Outlook 2007 generally uses fewer than 50 threads (which you can observe in Task Manager).
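
That test can be reproduced with a sketch along these lines (the exact count and the kind of failure depend on the operating system, the bitness of the process and the configured stack size, so treat it as an experiment rather than a guaranteed result):

    using System;
    using System.Threading;

    class ThreadLimitExperiment
    {
        static void Main()
        {
            int count = 0;
            try
            {
                while (true)
                {
                    // Each new thread reserves stack space (1 MB by default), so in a
                    // 32-bit process the address space runs out after roughly 2000 threads.
                    var t = new Thread(() => Thread.Sleep(Timeout.Infinite));
                    t.IsBackground = true;  // let the process exit even though the threads sleep forever
                    t.Start();
                    count++;
                }
            }
            catch (OutOfMemoryException)
            {
                Console.WriteLine("Threads created before failure: {0}", count);
            }
        }
    }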

There are also reasons why many people avoid multithreading unless they have no other choice. First, multithreaded programming is complex, and we are used to the model where code executes one line after another; a good multithreaded program tries to split the work so that it can run on multiple threads and exploit multiple processor cores. In addition, if several threads use the same resource, locking must be considered to avoid inconsistent data (the concepts of lock, transaction and concurrency are equally common in databases). Second, debugging is difficult, especially when the execution of one thread depends on another. Third, the performance of a multithreaded program may change as the environment (processor, operating system) changes, and a program tuned for one environment may not take full advantage of another: if we split a task into two threads for parallel execution, would it be more reasonable to split it into four on a quad-core processor? Honestly, that is hard to say. Also, our programs are built on the .NET Framework, which ultimately still uses operating system threads; many processes run in the operating system, and the processor belongs to everyone rather than exclusively to our program, so in such a mixed environment it is hard to guarantee that our program behaves exactly as we expect.
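
As a minimal sketch of the locking point (a made-up shared counter, not an example from the original article): without the lock, increments from the two threads can be lost, which is exactly the data inconsistency described above.

    using System;
    using System.Threading;

    class SharedCounter
    {
        private static int counter = 0;
        private static readonly object sync = new object();

        static void Increment()
        {
            for (int i = 0; i < 100000; i++)
            {
                lock (sync)     // serialize access to the shared variable
                {
                    counter++;
                }
            }
        }

        static void Main()
        {
            var t1 = new Thread(Increment);
            var t2 = new Thread(Increment);
            t1.Start();
            t2.Start();
            t1.Join();
            t2.Join();
            Console.WriteLine(counter);  // 200000 with the lock; often less without it
        }
    }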

Multithreading is useful, and multithreading is hard. This series of articles can only discuss, at a fairly simple level, how to do multithreaded programming with the .NET Framework, along with some typical uses of multithreading in common scenarios (such as Windows applications). I hope it helps.


Can C# multithreading be used for parallel computing (on a single computer)?

Yes, provided the single-threaded program is not "atomic", that is, its steps are independent of one another and can run in parallel regardless of order.
For example, if a method calls 10 other methods, you can start 10 threads (or queue 10 delegate tasks) to execute them. If the work is a block of statements, wrap the block in a private method and invoke it through a delegate, then use WaitAll in the main thread to wait for all of them to finish and see how long they take, as sketched below.
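
A hedged sketch of that idea on .NET 3.5 (the method names are invented, and delegate BeginInvoke as used here is a .NET Framework feature): ten pieces of work are queued as delegates, and WaitHandle.WaitAll blocks the main thread until all of them have completed.

    using System;
    using System.Threading;

    class WaitAllDemo
    {
        static void Work(int n)
        {
            Thread.Sleep(500);                      // stand-in for one of the ten methods
            Console.WriteLine("method {0} finished", n);
        }

        static void Main()
        {
            var handles = new WaitHandle[10];
            for (int i = 0; i < 10; i++)
            {
                int n = i;                          // capture the loop variable by value
                Action task = () => Work(n);
                // BeginInvoke runs the delegate on a thread-pool thread.
                handles[i] = task.BeginInvoke(null, null).AsyncWaitHandle;
            }
            WaitHandle.WaitAll(handles);            // wait for all ten to complete
            Console.WriteLine("all done");
        }
    }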

For my graduation project I want to describe the development of parallel computing in detail

The development of modern computers since the 1940s can be divided into two major periods: the era of serial computing and the era of parallel computing. Each computing era begins with the development of the architecture, followed by system software (especially compilers and operating systems) and application software, and finally reaches its peak with the development of problem-solving environments.

A parallel computer is composed of a group of processing units that work together, communicating and cooperating with one another, to complete a large-scale computing task at higher speed. Its two main components are therefore the computing nodes and the communication and collaboration between nodes, and the development of parallel computer architecture is mainly reflected in improvements in node performance and in inter-node communication technology.

Node performance continues to improve

In the early 1960s, with the advent of transistors and magnetic core memory, processing units became smaller and memory became more compact and cheaper. These technological advances led to the emergence of parallel computers. Parallel computers of this period were mostly small shared-memory multiprocessor systems, known as mainframes; the IBM 360 is a typical representative of the period.

By the end of the 1960s, a single processor began to include multiple functional units of the same kind, and pipelining also appeared. Compared with simply raising the clock frequency, these parallel features greatly improved the performance of parallel computer systems. At this time the University of Illinois and Burroughs started the Illiac IV project to build a 64-CPU SIMD mainframe system, involving research topics ranging from hardware technology, architecture, and I/O devices to operating systems, programming languages, and applications. However, by the time a scaled-down prototype (with only 16 CPUs) was launched in 1975, the entire computer industry had already changed dramatically.

First, the concept of the storage system was reworked, and the ideas of virtual memory and cache were put forward. Take the IBM 360/85 and IBM 360/91 as examples: the two models belong to the same series, and the 360/91 has a higher clock speed, faster memory, and a dynamically scheduled instruction pipeline, yet the overall performance of the 360/85 is higher than that of the 360/91. The only reason is that the former uses a cache while the latter does not.

Second, semiconductor memory began to replace core memory. Initially, semiconductor memory was used only as cache on some machines, while the CDC 7600 was the first to fully adopt this smaller, faster, directly addressable semiconductor memory; core memory has since left the stage of history. At the same time, integrated circuits appeared and were quickly applied to computers. These two revolutionary breakthroughs in component technology left the design of the Illiac IV at a disadvantage in both its underlying hardware and its parallel architecture.

High-speed processor development

Since the advent of the Cray-1 in 1976, vector computers firmly controlled the entire high-performance computer market for 15 years. The Cray-1 used carefully designed logic circuits, adopted what we would now call a reduced instruction set, and introduced vector registers to perform vector operations. With this series of techniques, the clock speed of the Cray-1 reached 80 MHz.

The performance of microprocessors also improved significantly as the machine word length grew from 4 bits to 8, 16, and 32 bits. Precisely because it saw this potential in the microprocessor, Carnegie Mellon University began, on the basis of DEC's popular PDP-11 minicomputer, to develop C.mmp, a shared-memory multiprocessor system consisting of 16 PDP-11/40 processors and 16 shared memory modules.

Since the 1980s, microprocessor technology has advanced rapidly. Later, a bus protocol very well suited to the SMP model was introduced, and the University of California, Berkeley extended the bus protocol and proposed a solution to the cache coherence problem. Since then the path to shared-memory multiprocessors opened up by C.mmp has grown wider and wider, and today this architecture essentially dominates the server and desktop workstation market.

...
