Thread Management
Thread management has gotten easier. In .NET, you can get threads from the thread pool. A thread pool is a factory that creates threads; calls to it block once it has created its maximum number of threads and none have been returned to the pool. But how do you make sure that too many threads aren't running at the same time? After all, if each thread can occupy 100% of a CPU core, then running more threads than cores only forces the operating system to time-slice them, which causes context switching and inefficient execution. In other words, two CPU-bound threads on the same core do not finish in twice the time of one thread; they take roughly twice the time plus 10% or so of overhead. Compared with a single thread, three threads each trying to consume 100% of the same core may take 3.25 to 3.5 times as long. My experience is that when multiple threads per core each try to occupy 100% of the CPU, none of them achieves it.
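As a small illustration of those pool limits (a sketch of my own, not from the original text), you can query how many threads the pool will create before further work items have to wait:

using System;
using System.Threading;

class PoolLimits
{
    static void Main()
    {
        int workers, completionPorts;
        // The pool stops creating threads at this ceiling;
        // additional work items wait in the queue.
        ThreadPool.GetMaxThreads(out workers, out completionPorts);
        Console.WriteLine("Max worker threads: {0}, cores: {1}",
            workers, Environment.ProcessorCount);
    }
}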
So how do you limit the number of threads running at any one time?
One way is to create a Semaphore object shared between the threads. Before a thread starts its work, it calls the semaphore's WaitOne method, and it releases the semaphore on completion. Set the semaphore's limit to the number of cores in the CPU (using the Environment.ProcessorCount property); this prevents more threads than cores from running at the same time. Meanwhile, pulling threads from the thread pool ensures that you don't create too many threads at once. Creating too many threads at a time wastes system resources even if they aren't running, because each thread consumes resources. The general pattern for using the semaphore is as follows:
using System;
using System.Threading;

static Semaphore threadBlocker;

static void Execute(object state)
{
    threadBlocker.WaitOne();
    // do work
    threadBlocker.Release();
}

static void RunThreads(int workItemCount)
{
    // Initial and maximum counts both equal the core count.
    threadBlocker = new Semaphore(Environment.ProcessorCount, Environment.ProcessorCount);
    for (int x = 0; x < workItemCount; x++)
        ThreadPool.QueueUserWorkItem(new WaitCallback(Execute));
}
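Note that the Semaphore constructor takes an initial count and a maximum count, and the initial count must equal the core count here: a semaphore that starts at zero would leave every WaitOne blocked forever, since nothing has released it yet. The workItemCount parameter is a hypothetical stand-in; pass however many work items you actually have, for example RunThreads(50).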
There are, of course, other ways to solve this problem. Some time ago I kept a list of objects, each representing the complete state of one piece of work. As each work item executed and completed, it filled in its data and set a property indicating that its task was done. The main thread scanned the list and started another work item whenever few enough threads were running. To be honest, although this method works, it is a nightmare to code and debug, so I don't recommend it at all.
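For illustration only, here is a rough sketch of that polling pattern; the names WorkItem, IsStarted, IsComplete, and PollingScheduler are hypothetical, not the original code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

// Hypothetical state object for one piece of work.
class WorkItem
{
    public volatile bool IsStarted;
    public volatile bool IsComplete;   // set by the worker when done

    public void Run(object state)
    {
        // ... do the actual work and fill in result fields ...
        IsComplete = true;
    }
}

static void PollingScheduler(List<WorkItem> items)
{
    int maxRunning = Environment.ProcessorCount;
    while (items.Any(i => !i.IsComplete))
    {
        int running = items.Count(i => i.IsStarted && !i.IsComplete);
        // Start more items only while few enough threads are running.
        foreach (var item in items.Where(i => !i.IsStarted).Take(maxRunning - running))
        {
            item.IsStarted = true;
            ThreadPool.QueueUserWorkItem(item.Run);
        }
        Thread.Sleep(50);  // the main thread repeatedly re-scans the list
    }
}

The repeated scanning and the shared flags are exactly what makes this approach hard to debug, which is why the semaphore pattern above is preferable.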
Data Integrity
Overall, in terms of data integrity, you need to worry about race conditions and deadlocks. Multiple threads trying to update the same object at the same time create a race condition, which will cause trouble. Imagine the following code:
int x = 5;
x = x + 10;
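The danger is that x = x + 10 is not atomic: it compiles to a read, an add, and a write. A minimal sketch of how two threads can interleave those steps and lose an update (the thread setup here is my own illustration, not the original code):

using System;
using System.Threading;

class RaceSketch
{
    static int x = 5;

    static void Main()
    {
        // Each thread performs read-add-write; if both read 5 before
        // either writes, one increment is lost.
        var t1 = new Thread(() => { x = x + 10; });
        var t2 = new Thread(() => { x = x + 10; });
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        Console.WriteLine(x); // usually 25, but a race can yield 15
    }
}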