A Thorough Look at the CLR: Thread Management in the CLR

Source: Internet
Author: User
Tags: resource, thread

This column is based on a pre-release version of the CLR threading system and the Task Parallel Library. All information is subject to change.

The ongoing technological shift from single-core to multicore architectures offers many benefits. For example, an application that uses multiple threads effectively can improve performance by exploiting multiple cores in parallel; an ASP.NET application, for instance, can use multithreading to issue several independent database queries concurrently.
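The benefit of running independent operations concurrently can be sketched as follows. This is an illustrative Python stand-in (the article's actual context is the CLR ThreadPool), and `fake_query` is a hypothetical placeholder for a real database call:

```python
# Illustrative sketch: several independent "queries" run concurrently on a
# thread pool instead of sequentially. `fake_query` is a stand-in for a
# real database call that spends most of its time waiting on I/O.
from concurrent.futures import ThreadPoolExecutor
import time

def fake_query(name, seconds):
    time.sleep(seconds)          # simulate the I/O latency of a query
    return f"{name}: done"

queries = [("orders", 0.2), ("customers", 0.2), ("inventory", 0.2)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(lambda q: fake_query(*q), queries))
elapsed = time.perf_counter() - start

print(results)
# Run concurrently, three 0.2 s "queries" finish in roughly 0.2 s
# instead of the ~0.6 s a sequential loop would take.
print(f"elapsed ~{elapsed:.2f}s")
```

Because the queries are independent, no synchronization between them is needed; the speedup comes purely from overlapping their waiting time.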

However, using multiple cores introduces new problems: programming and synchronization models become more complex, concurrency must be controlled, and tuning and optimizing performance becomes harder. In addition, many of the new factors that affect the performance and behavior of concurrent applications are not yet well understood. As a result, optimizing performance is challenging, especially when the goal is not a particular application but applications in general.

The thread system in the CLR operates in a concurrent environment, and many factors, including those introduced by multicore architectures, can affect its concurrent behavior and performance. Lock contention, cache contention, and excessive context switching are just a few of them. Current work on the CLR ThreadPool aims to make it easier for developers to exploit concurrency and parallelism, achieving higher concurrency levels and better performance by integrating parallel frameworks into the runtime and handling some of the most common problems internally.

In this column, we will look at some common problems and important considerations for developers optimizing multithreaded managed code, particularly on multiprocessor hardware, and explain how upcoming changes to the ThreadPool will address these issues and ease the programmer's workload. We will also discuss some key aspects of concurrency control to help you understand how an application behaves in a highly multithreaded environment. The discussion assumes you are familiar with the basic concepts of concurrency, synchronization, and threading in the CLR.

Common problems with concurrency

Most software is designed for single-threaded execution, partly because a single-threaded programming model reduces complexity and is easier to code. Existing single-threaded applications adapt poorly to multicore hardware and cannot take advantage of multiple cores. To make matters worse, adapting a single-threaded design to a multicore environment is not easy. Some of the resulting problems are predictable, such as lock contention, resource contention, race conditions, and resource starvation; others are less obvious. For example, how do you determine the optimal concurrency level (number of threads) for a particular type and size of workload at any given moment?
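A race condition, one of the predictable problems mentioned above, is easy to reproduce with an unprotected read-modify-write. The sketch below (Python, purely for illustration) splits the increment into an explicit read and write so the race window is visible, then repeats the run with a lock:

```python
import threading

ITERATIONS, THREADS = 1000, 4

def run(worker):
    """Reset the shared counter and run `worker` on several threads."""
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(THREADS)]
    for t in threads: t.start()
    for t in threads: t.join()
    return counter

def racy():
    global counter
    for _ in range(ITERATIONS):
        tmp = counter          # read ...
        counter = tmp + 1      # ... write: another thread may have run in between

lock = threading.Lock()
def locked():
    global counter
    for _ in range(ITERATIONS):
        with lock:             # the read-modify-write is now atomic
            counter += 1

racy_total = run(racy)
locked_total = run(locked)
# racy_total may fall short of THREADS * ITERATIONS; locked_total never does.
print(racy_total, locked_total)
```

Whether the racy version actually loses updates on a given run depends on thread scheduling, which is exactly what makes such bugs hard to find.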

Consider a thread system in the CLR designed to run on a single-core computer: tasks (work items) are queued one at a time into a single list protected by a lock, and each worker thread dequeues items sequentially. When this design is migrated to a multicore architecture, heavy contention for that lock at higher concurrency levels degrades performance. Interestingly, in the October 2008 installment of his "Windows with C++" column, "Exploring High-Performance Algorithms," Kenny Kerr observed that a well-designed algorithm on a single processor can often outperform an inefficient implementation on a multiprocessor, which means that simply adding more processors does not always improve performance.
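The single-lock design described above can be sketched as follows. This is a minimal Python stand-in, not the CLR's actual implementation: one shared queue guarded by one lock, with every worker contending for that lock on each dequeue, which is precisely the bottleneck that hurts at high concurrency:

```python
# Minimal sketch of a single-lock work queue: every enqueue and every
# dequeue goes through the same lock, so workers serialize on it.
import threading
from collections import deque

class SingleLockPool:
    def __init__(self, workers=4):
        self._queue = deque()
        self._lock = threading.Lock()              # the single, global lock
        self._cv = threading.Condition(self._lock)
        self._done = False
        self._threads = [threading.Thread(target=self._worker)
                         for _ in range(workers)]
        for t in self._threads:
            t.start()

    def queue_work_item(self, fn):                 # loosely analogous to QueueUserWorkItem
        with self._cv:
            self._queue.append(fn)
            self._cv.notify()

    def _worker(self):
        while True:
            with self._cv:                         # every dequeue contends here
                while not self._queue and not self._done:
                    self._cv.wait()
                if not self._queue and self._done:
                    return                         # drained and shut down
                fn = self._queue.popleft()
            fn()                                   # run the item outside the lock

    def shutdown(self):
        with self._cv:
            self._done = True
            self._cv.notify_all()
        for t in self._threads:
            t.join()

# Usage: queue 100 items, then drain the pool.
total, tlock = [], threading.Lock()

def record(i):
    with tlock:
        total.append(i)

pool = SingleLockPool(workers=4)
for i in range(100):
    pool.queue_work_item(lambda i=i: record(i))
pool.shutdown()
print(len(total))  # 100
```

On one core this design is fine, since only one thread runs at a time anyway; on many cores, all workers pile up on `_lock`, and the queue becomes the serialization point.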

In the remainder of this article, we refer to the existing threading system collectively as the ThreadPool, and to work items as tasks placed in its queue (for example, via QueueUserWorkItem). On a single-core computer, concurrency can be managed through fine-grained scheduling that allocates processor time among threads. With multiple cores, you must also consider how to distribute work fairly among the cores, account for the underlying memory hierarchy to ensure correctness and performance, and decide how to control and exploit higher concurrency levels.
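As a starting point for "deciding how to control concurrency levels," one common heuristic (an assumption of this sketch, not a rule from the article) is to size a pool to the core count for CPU-bound work and to oversubscribe for I/O-bound work:

```python
# Heuristic pool sizing, sketched in Python: roughly one thread per core
# for CPU-bound work; more threads when they mostly block on I/O.
# The multiplier of 4 is an illustrative assumption, not a fixed rule.
import os
from concurrent.futures import ThreadPoolExecutor

cores = os.cpu_count() or 1
cpu_bound_workers = cores          # threads compete for CPU time
io_bound_workers = cores * 4       # threads spend most time waiting

with ThreadPoolExecutor(max_workers=cpu_bound_workers) as pool:
    squares = list(pool.map(lambda n: n * n, range(8)))
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Part of the motivation for the ThreadPool work described here is to move exactly this kind of tuning out of application code and into the runtime, which can observe actual throughput and adjust.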
