Two points to be aware of when performing multithreading on .NET

Source: Internet
Author: User

Thread Management

Thread management is getting easier now. In the .NET framework, you can get threads from the thread pool. A thread pool is a factory that hands out threads; calls to it block once it has created a certain number of threads that have not yet been returned to the pool. But how do you make sure there aren't too many threads running at any given time? After all, if each thread can occupy 100% of a CPU core, then running more threads than there are cores only forces the operating system to time-slice between them, which causes context switching and inefficient execution. In other words, two threads on the same core do not finish in twice the time of one thread; they may take twice the time plus 10% or so. Compared to a single thread, three threads each trying to consume 100% of the same core may take 3.25 to 3.5 times as long. My experience is that when several threads per core each try to occupy 100% of the CPU, none of them succeeds.

So how do you limit the number of threads that are running?

One way is to create a semaphore object shared between threads. Before a thread starts its work, it calls the semaphore's WaitOne method, and it releases the semaphore upon completion. Set the semaphore's limit to the number of cores in the CPU (using the Environment.ProcessorCount property); this prevents more threads from running at the same time than there are cores. Meanwhile, pulling threads from the thread pool ensures that you do not create too many threads at once. If you create too many threads at a time, even those that are not yet running waste system resources, because every thread consumes resources. The general pattern for using a semaphore is as follows:

static Semaphore threadBlocker;
static void Execute(object state)
{
    threadBlocker.WaitOne();
    // do work
    threadBlocker.Release();
}
static void RunThreads()
{
    // initial count must equal the maximum, so up to ProcessorCount threads may proceed
    threadBlocker = new Semaphore(Environment.ProcessorCount, Environment.ProcessorCount);
    for (int x = 0; x < 100; x++)  // 100 work items is an arbitrary example count
        ThreadPool.QueueUserWorkItem(new WaitCallback(Execute));
}

There are, of course, other ways to solve this problem. Some time ago I kept a list of objects, where each object represented the complete state of one work item. As each work item executed and completed, it filled itself with data and set a property to indicate that the task was done. The main thread scanned the list of objects and started another work item whenever the number of running threads was small enough. To be honest, although this method works, it is a nightmare to code and debug, so I don't recommend it at all.
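For completeness, here is a sketch of what that polling pattern looks like. The WorkItem class, its Started/Done flags, the item count, and the polling interval are all illustrative assumptions, not names from the original:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical per-work-item state holder.
class WorkItem
{
    public volatile bool Started;
    public volatile bool Done;
    public int Result;

    public void Run(object state)
    {
        Result = 42;   // stand-in for the real work
        Done = true;   // flag the main thread checks
    }
}

class Program
{
    static void Main()
    {
        var items = new List<WorkItem>();
        for (int i = 0; i < 8; i++) items.Add(new WorkItem());

        int maxConcurrent = Environment.ProcessorCount;

        // The main thread scans the list, starting another item
        // whenever fewer than maxConcurrent are still running.
        while (items.Exists(w => !w.Done))
        {
            int running = items.FindAll(w => w.Started && !w.Done).Count;
            foreach (var w in items)
            {
                if (running >= maxConcurrent) break;
                if (!w.Started)
                {
                    w.Started = true;
                    ThreadPool.QueueUserWorkItem(w.Run);
                    running++;
                }
            }
            Thread.Sleep(10); // polling interval -- part of why this is painful to debug
        }
        Console.WriteLine("all done");
    }
}
```

Note how the semaphore version expresses the same "at most N at once" rule declaratively, while this version re-derives it on every scan; that is the bookkeeping the author is warning against.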

Data integrity

Overall, in terms of data integrity, you need to worry about race conditions and deadlocks. Multiple threads trying to update the same object at the same time create a race condition, which will cause trouble. Imagine the following code:

int x = 5;

x = x + 10;

The statement x = x + 10 is not atomic: it is a read, an add, and a write. If two threads run it at the same time against a shared x, their read-modify-write cycles can interleave, and one thread's update can be lost.
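A minimal sketch of guarding such a shared update with C#'s lock statement; the names and loop counts here are illustrative, not from the original:

```csharp
using System;
using System.Threading;

class Program
{
    static int x = 0;
    static readonly object gate = new object();

    static void Main()
    {
        // Two threads each add 10 to x 100,000 times. Without the lock,
        // interleaved read-modify-write cycles could lose updates; with it,
        // the total is deterministic.
        ThreadStart body = () =>
        {
            for (int i = 0; i < 100000; i++)
            {
                lock (gate) { x = x + 10; }
            }
        };
        var t1 = new Thread(body);
        var t2 = new Thread(body);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        Console.WriteLine(x); // prints 2000000
    }
}
```

For a single numeric update like this, Interlocked.Add(ref x, 10) achieves the same effect without an explicit lock.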
