Multithreading and Thread Synchronization (Continued)
This article continues the previous one, carrying on with multithreading.
Thread Synchronization
In most cases, the threads in a program run concurrently. Some threads are independent of one another; such threads are called unrelated threads. Other threads, however, need to pass results to each other and share resources; these are called related threads. For example, when we watch a movie online, one thread downloads the movie while another plays it. Only when they cooperate can we watch the movie, and they share resources between them. From this we can see that the relationship between threads shows up in their access to the same resource. Such a shared resource is called a critical resource, and the code that accesses it is called the critical region (critical section). Let's look at a program:
// Buffer, which can only hold one character
private static char buffer;

static void Main(string[] args)
{
    // Writer thread: writes characters into the buffer
    Thread writer = new Thread(delegate ()
    {
        string sentence = "I can't help myself; it seems we have met before, wandering along the fragrant garden path.";
        for (int i = 0; i < 24; i++)
        {
            buffer = sentence[i];   // write the character into the buffer
            Thread.Sleep(25);
        }
    });

    // Reader thread: reads characters from the buffer
    Thread reader = new Thread(delegate ()
    {
        for (int i = 0; i < 24; i++)
        {
            char ch = buffer;       // read the character from the buffer
            Console.Write(ch);
            Thread.Sleep(25);
        }
    });

    writer.Start();
    reader.Start();
}
We create two threads: a Writer thread that writes characters into the buffer and a Reader thread that reads characters from it. We assume the buffer can hold only one character at a time; that is, if the Reader does not read the character in time, it is overwritten by the next character the Writer writes. Running the program shows the problem: because the two threads are not coordinated, characters can be skipped or repeated and the output comes out scrambled.

Counters (the Interlocked Class)

One way to coordinate the two threads is with the Interlocked class. We add a counter that records whether the buffer is in use:
// Buffer, which can only hold one character
private static char buffer;
// Counter: amount of used space in the buffer (initial value 0)
private static long numberOfUsedSpace = 0;

static void Main(string[] args)
{
    string sentence = "I can't help myself; it seems we have met before, wandering along the fragrant garden path.";

    // Writer thread: writes characters into the buffer
    Thread writer = new Thread(delegate ()
    {
        for (int i = 0; i < 24; i++)
        {
            // Before writing, check whether the buffer is full.
            // If it is full, wait; if not, write the character.
            while (Interlocked.Read(ref numberOfUsedSpace) == 1)
            {
                Thread.Sleep(10);
            }
            buffer = sentence[i];   // write the character into the buffer
            Thread.Sleep(25);
            // After writing, change numberOfUsedSpace from 0 to 1
            Interlocked.Increment(ref numberOfUsedSpace);
        }
    });

    // Reader thread: reads characters from the buffer
    Thread reader = new Thread(delegate ()
    {
        for (int i = 0; i < 24; i++)
        {
            // Before reading, check whether the buffer is full.
            // If it is full, read; if it is empty, wait.
            while (Interlocked.Read(ref numberOfUsedSpace) == 0)
            {
                Thread.Sleep(25);
            }
            char ch = buffer;       // read the character from the buffer
            Console.Write(ch);
            // After reading, change numberOfUsedSpace from 1 to 0
            Interlocked.Decrement(ref numberOfUsedSpace);
        }
    });

    writer.Start();
    reader.Start();
}
We use the numberOfUsedSpace variable as a counter: numberOfUsedSpace = 1 means the buffer is full and numberOfUsedSpace = 0 means it is empty. Before the Writer thread writes a character into the buffer, it uses Interlocked.Read() to check whether the buffer is full; if it is not full, it writes the character, otherwise it waits. Similarly, before the Reader thread reads a character from the buffer, it uses Interlocked.Read() to check whether the buffer is full; if it is full, it reads the character, otherwise it waits.
Monitors (the Monitor Class)
Another way to implement thread synchronization is through the Monitor class. Look at the program:
// Buffer, which can only hold one character
private static char buffer;
// Object used for synchronization (the exclusive lock)
private static object lockForBuffer = new object();

static void Main(string[] args)
{
    // Writer thread: writes characters into the buffer
    Thread writer = new Thread(delegate ()
    {
        string sentence = "I can't help myself; it seems we have met before, wandering along the fragrant garden path.";
        for (int i = 0; i < 24; i++)
        {
            try
            {
                // Enter the critical section
                Monitor.Enter(lockForBuffer);
                buffer = sentence[i];   // write the character into the buffer
                // Wake up the thread sleeping on the critical resource
                Monitor.Pulse(lockForBuffer);
                // Put the current thread to sleep on the critical resource
                Monitor.Wait(lockForBuffer);
            }
            catch (ThreadInterruptedException)
            {
                Console.WriteLine("Thread writer was interrupted");
            }
            finally
            {
                // Exit the critical section
                Monitor.Exit(lockForBuffer);
            }
        }
    });

    // Reader thread: reads characters from the buffer
    Thread reader = new Thread(delegate ()
    {
        for (int i = 0; i < 24; i++)
        {
            try
            {
                // Enter the critical section
                Monitor.Enter(lockForBuffer);
                char ch = buffer;       // read the character from the buffer
                Console.Write(ch);
                // Wake up the thread sleeping on the critical resource
                Monitor.Pulse(lockForBuffer);
                // Put the current thread to sleep on the critical resource
                Monitor.Wait(lockForBuffer);
            }
            catch (ThreadInterruptedException)
            {
                Console.WriteLine("Thread reader was interrupted");
            }
            finally
            {
                // Exit the critical section
                Monitor.Exit(lockForBuffer);
            }
        }
    });

    writer.Start();
    reader.Start();
}
When a thread wants to enter the critical section, it calls Monitor's Enter() method to obtain the exclusive lock. If it gets the lock, it carries out its work; if the lock is held by another thread, it waits on the critical resource until the exclusive lock is released. Any other thread that tries to enter the critical section will likewise find the lock occupied and wait on the critical resource. Monitor records which threads are waiting on which critical resources. When the thread finishes its work, it calls Pulse() to wake up a thread sleeping on the critical resource; because it still has its next operation to perform, it then calls Wait() to put itself to sleep on the critical resource. Finally, the exclusive lock is released by calling Exit().
Note: Monitor can only lock reference-type variables. If a value-type variable is used, each call to Enter() boxes it, and every boxing operation produces a new object; locking a different object each time gives no synchronization at all. To make sure the lock is released even if the critical section throws, the Monitor calls should be placed in a try statement with Exit() in the finally block. For convenience, C# provides a more concise statement:
lock (object to be locked)
{
    // code of the critical section
    ......
}
When the lock block finishes executing, the Exit() method is called automatically to release the lock. It is equivalent to:
try
{
    Monitor.Enter(object to be locked);
    // code of the critical section
    ......
}
finally
{
    Monitor.Exit(object to be locked);
}
While a thread accesses a resource under an exclusive lock, no other thread can access that resource; the others can get in only after the lock block ends. In a sense, the lock statement temporarily suspends the multithreaded nature of the program: it puts a lock on the resource and other threads can only wait, which can greatly reduce the program's efficiency. Therefore, use an exclusive lock only where necessary. (Looking back at Interlocked: when the Read() method shows the condition is not met, the thread waits by sleeping, so its state becomes WaitSleepJoin. Monitor.Enter(), by contrast, blocks the thread until the exclusive lock is released. That is one difference between the two approaches.)
Mutual Exclusion (the Mutex Class)
In an operating system, threads often need to share resources, and some of these resources must be used exclusively, that is, by only one thread at a time. Such a resource is called a mutually exclusive resource, and the mechanism that protects it is mutual exclusion (Mutex). From a certain angle mutual exclusion also achieves thread synchronization, so it can be regarded as a special kind of synchronization. Similar to Monitor, only the thread that owns the Mutex object can enter the critical section; threads that do not own it can only wait outside. Using Mutex consumes more resources than using Monitor, but it can synchronize threads across different programs on the same system.
Mutexes are divided into local mutexes and system mutexes. As the name suggests, a local mutex is valid only within the program that created it, while a system mutex (created with a name) is valid throughout the whole system.
Look at the two programs:
static void Main(string[] args)
{
    Thread threadA = new Thread(delegate ()
    {
        // Create a system mutex named "MutexForTimeRecordFile"
        Mutex fileMutex = new Mutex(false, "MutexForTimeRecordFile");
        string fileName = @"E:\TimeRecord.txt";
        for (int i = 1; i <= 10; i++)
        {
            try
            {
                // Request ownership of the mutex: enter the critical
                // section on success, otherwise wait
                fileMutex.WaitOne();
                // Operate on the critical resource, i.e. write to the file
                File.AppendAllText(fileName, "ThreadA: " + DateTime.Now + "\r\n");
            }
            catch (ThreadInterruptedException)
            {
                Console.WriteLine("Thread A was interrupted.");
            }
            finally
            {
                fileMutex.ReleaseMutex();   // release ownership of the mutex
            }
            Thread.Sleep(1000);
        }
    });
    threadA.Start();
}
static void Main(string[] args)
{
    Thread threadB = new Thread(delegate ()
    {
        // Create a system mutex with the same name, "MutexForTimeRecordFile"
        Mutex fileMutex = new Mutex(false, "MutexForTimeRecordFile");
        string fileName = @"E:\TimeRecord.txt";
        for (int i = 1; i <= 10; i++)
        {
            try
            {
                // Request ownership of the mutex: enter the critical
                // section on success, otherwise wait
                fileMutex.WaitOne();
                // Operate on the critical resource, i.e. write to the file
                File.AppendAllText(fileName, "ThreadB: " + DateTime.Now + "\r\n");
            }
            catch (ThreadInterruptedException)
            {
                Console.WriteLine("Thread B was interrupted.");
            }
            finally
            {
                fileMutex.ReleaseMutex();   // release ownership of the mutex
            }
            Thread.Sleep(1000);
        }
    });
    threadB.Start();
    Process.Start("MutexA.exe");   // start program A (MutexA.exe)
}
These are two separate programs. Both create a system mutex with the same name, "MutexForTimeRecordFile", which is valid throughout the system and can therefore be shared across programs. Program B also contains the code that launches program A. After compiling, we put the two executables in the same directory and run B: the two programs take turns appending their timestamps to the same file.
We can see that two different programs achieve cross-program thread synchronization through the same system mutex name.
Let's summarize the three classes that C# gives us for thread synchronization.
1. Interlocked class
Call the Read() method to read the counter and decide, based on its value, whether to proceed or to wait; this achieves thread synchronization.
2. Monitor class
Call the Enter() method to obtain the exclusive lock. After the work in the critical section is done, call Pulse() to wake up a thread sleeping on the critical resource, and call Wait() to put the current thread to sleep on the critical resource until its next access to the critical section. The difference between Monitor and Interlocked is that Monitor.Enter() blocks the thread until the exclusive lock becomes available, whereas the Interlocked approach waits by sleeping in a loop, leaving the thread in the WaitSleepJoin state.
3. Mutex class
Call the mutex's WaitOne() method to request ownership. The thread that obtains ownership can enter the critical section; threads that do not obtain it wait outside the critical section. A Mutex object consumes more resources than Monitor, but it can synchronize threads across programs.
Mutex is divided into local Mutex and system Mutex.
Q: Please give a comprehensive explanation of Java multithreading and the relevant methods and functions; examples would be best.
Java Memory Model
Different platforms have different memory models, but the JVM's memory model specification is uniform across platforms, and Java's multithreaded concurrency problems ultimately come down to the Java memory model. So-called thread safety is nothing more than controlling orderly access to, or modification of, a resource by multiple threads. The Java memory model has two main problems to solve: visibility and ordering. We all know that a computer has high-speed caches, so the processor does not go to main memory on every operation. The JVM defines its own memory model and shields developers from the memory-management details of the underlying platform; for Java developers, the task is to solve multithreaded visibility and ordering problems on top of the JVM memory model.
So what is visibility? Threads cannot communicate with each other directly; they can communicate only through shared variables. The Java Memory Model (JMM) specifies that the JVM has a main memory shared by all threads, and newly created objects are also allocated in main memory. Each thread has its own working memory, which stores copies of some of the objects in main memory (the working memory is, of course, limited in size). When a thread operates on an object, it executes in the following order:
(1) Copy the variable from main memory into the current working memory (read and load).
(2) Execute the code and change the value of the shared variable (use and assign).
(3) Use the working-memory data to refresh the corresponding content in main memory (store and write).

The JVM specification defines the operations a thread may perform on main memory: read, load, use, assign, store and write. When a shared variable has copies in the working memory of several threads and one thread modifies it, the other threads should be able to see the modified value. This is the visibility problem in multithreading.
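To make visibility concrete, here is a minimal, hypothetical Java sketch (not from the original article): a worker thread loops on a stop flag that the main thread later sets. Without volatile or other synchronization, the worker may keep using the stale copy of the flag in its working memory and never stop; declaring the flag volatile forces every read and write of it to go through main memory, so the update becomes visible. The class and field names are illustrative.

public class VisibilityDemo {
    // Without "volatile" the worker may never see the update below;
    // "volatile" makes writes to this flag visible to other threads.
    private static volatile boolean stopRequested = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long iterations = 0;
            while (!stopRequested) {   // reads the shared flag
                iterations++;
            }
            System.out.println("Worker stopped after " + iterations + " iterations");
        });
        worker.start();

        Thread.sleep(1000);        // let the worker run for a while
        stopRequested = true;      // visible to the worker because the flag is volatile
        worker.join();
    }
}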
So what is ordering? When a thread references a variable, it cannot reference it directly from main memory. If the variable is not yet in the thread's working memory, a copy is first copied from main memory into working memory (read-load), and the thread then references that copy. When the same thread references the field again, it may fetch a fresh copy from main memory (read-load-use) or simply reuse the existing copy (use); in other words, how often read, load and use occur is up to the JVM implementation.
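A practical consequence of this freedom is that, without synchronization, a reader thread may observe writes in a different order than the writer performed them, or not observe them at all. The following hypothetical sketch (names are illustrative) shows the classic pattern: the reader may legally print 0 even though it saw ready become true, and it may even spin forever, because nothing orders the two writes or forces them into main memory. Declaring ready as volatile, or guarding both fields with synchronized, restores the expected behaviour.

public class OrderingDemo {
    private static int value = 0;
    private static boolean ready = false;   // making this volatile would prevent the problem

    public static void main(String[] args) {
        Thread reader = new Thread(() -> {
            while (!ready) {
                Thread.yield();             // wait until the writer signals
            }
            // Without synchronization this may print 0: the write to "value"
            // is not guaranteed to be visible before the write to "ready".
            System.out.println(value);
        });
        reader.start();

        value = 42;      // (1) write the data
        ready = true;    // (2) publish it -- the reader may see these out of order
    }
}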
A thread cannot assign a value directly to a field in main memory either: it assigns to the copy of the variable in its working memory (assign), and that copy is later synchronized back to main memory (store-write). When that synchronization happens is, again, decided by the JVM implementation. When the same thread assigns to the field repeatedly, for example:
for (int i = 0; i < 10; i++)
    a++;
the thread may assign only to the copy in its working memory and synchronize it back to main memory after the last assignment, so the order and timing of assign, store and write are also decided by the JVM implementation. Now suppose there is a shared variable x and thread A executes x = x + 1. From the description above we know that x = x + 1 is not an atomic operation; it executes as follows:
1. Read the copy of variable x from main memory into working memory.
2. Add 1 to x.
3. Write the incremented value of x back to main memory.

If another thread B executes x = x - 1 at the same time, its execution process is:

1. Read the copy of variable x from main memory into working memory.
2. Subtract 1 from x.
3. Write the decremented value of x back to main memory.
Obviously, the final value of x is unreliable. Suppose x is currently 10; thread A adds 1 and thread B subtracts 1, so on the surface the final value of x should still be 10. With multiple threads, however, the following can happen:
1: Thread A reads the copy of x from main memory into its working memory; there, x is 10.
2: Thread B reads the copy of x from main memory into its working memory; there, x is 10.
3: Thread A adds 1 to x in its working memory; there, x is now 11.
4: Thread A writes 11 back to main memory.
5: Thread B subtracts 1 from x in its working memory; there, x is now 9.
6: Thread B writes 9 back to main memory.

The final value of x is 9 rather than 10: one of the two updates has been lost. A minimal Java sketch of this race, and one way to fix it, follows.
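As an illustration (not code from the original article), here is a hypothetical sketch of this lost-update race: ten threads each increment a plain int field 100,000 times, and the final count usually comes up short of 1,000,000. An AtomicInteger, whose incrementAndGet() performs the read-modify-write as one atomic step, gives the expected result; a synchronized method would work as well.

import java.util.concurrent.atomic.AtomicInteger;

public class LostUpdateDemo {
    // Plain field: "unsafeCounter++" is read, add 1, write back -- not atomic
    private static int unsafeCounter = 0;
    // Atomic counter: the read-modify-write happens as one indivisible step
    private static final AtomicInteger safeCounter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[10];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) {
                    unsafeCounter++;                 // updates can be lost
                    safeCounter.incrementAndGet();   // no update is lost
                }
            });
            threads[t].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("unsafe counter: " + unsafeCounter);      // usually less than 1000000
        System.out.println("safe counter:   " + safeCounter.get());  // always 1000000
    }
}

Note that declaring the plain counter volatile would not fix this: volatile only guarantees visibility, while x = x + 1 is still a separate read, modify and write.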