.NET Synchronization and Asynchronization (6): Background Knowledge



In the previous five essays, I introduced common methods in the .NET class library for implementing parallelism, along with their basic usage. Of course, that basic usage is far from exhaustive; it can only serve as an introduction. The first five articles are:

.NET synchronous and asynchronous: encapsulation into Task (5)
.NET synchronous and asynchronous (4)
.NET synchronous and asynchronous (3)
.NET synchronous and asynchronous (2)
.NET implementation in parallel (1)

 

Looking back, these five essays all belong to the synchronization and asynchronization series. Synchronization and asynchronization is a big, general topic, and it is difficult for me to introduce everything I have learned clearly; however, I will try my best. Below is an outline of the knowledge points I intend to cover in the near future. If you have better comments or suggestions, please leave a comment.

1. Some background knowledge related to "synchronization and asynchronization" (that is, this article)

Thread context, context switching, race conditions, locks, and deadlocks.

 

2. Locks (lock, Monitor, ReaderWriterLockSlim) (1-2 articles expected)

 

3. Lightweight locks (Interlocked and SpinLock)

https://msdn.microsoft.com/zh-cn/library/system.threading.spinlock(v=vs.110).aspx

https://msdn.microsoft.com/zh-cn/library/system.threading.interlocked(v=vs.110).aspx

 

4. The WaitHandle family (2 essays expected)

https://msdn.microsoft.com/zh-cn/library/system.threading.waithandle(VS.80).aspx

https://msdn.microsoft.com/zh-cn/library/system.threading.semaphoreslim(v=vs.110).aspx

 

5. Be cautious with variable capture in closures (1 article expected)

 

6. Thread-safe collections (1 essay expected)

 

7. Suiting the approach to the situation: CPU-intensive operations and I/O-intensive operations (2 more articles expected)

 


That is the outline for now. Next, let us get into the main body of this essay: the background knowledge related to threads.

 

Thread Context

I will not belabor how powerful threads are. But something this powerful can hardly be "free" to use; as the saying goes, sooner or later you have to pay for what you get.

In Windows, when a thread is created, the system allocates a certain amount of memory for it by default. Part of that memory is used to save the values of the CPU registers; we call this block of memory the thread context.

Besides the register set, another part of the memory is the thread stack. The memory overhead of the thread stack is much higher than that of the register set.
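
As a rough illustration (my own sketch, not code from the original article), the Thread constructor overload below accepts a maxStackSize argument, which makes the per-thread stack allocation visible; the 256 KB figure is just an arbitrary choice for the example.

```csharp
using System;
using System.Threading;

// A minimal sketch: every new thread gets its own stack. This constructor
// overload lets the caller pick the stack size instead of the default.
class ThreadStackDemo
{
    static void Main()
    {
        var worker = new Thread(
            () => Console.WriteLine("Running on a thread with a 256 KB stack."),
            maxStackSize: 256 * 1024);

        worker.Start();
        worker.Join();
    }
}
```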

Context switching

A thread is the smallest unit of a program's execution flow. It behaves like a logical (or virtual) CPU. The many logical CPUs (threads) in the system share the physical CPU cores of the computer (one CPU may have multiple cores).

At any given moment, Windows assigns at most one thread to a CPU core. After the assigned thread has run for the length of one "time slice", the time slice expires and Windows performs a context switch.

Every time it switches the context, Windows performs the following steps:

1. Save the values in the CPU registers to the current thread's context.
2. Select the next thread to run from the set of existing threads.
3. Load the selected thread's context into the CPU registers.

When the context switch completes, the CPU runs the selected thread until its time slice expires, and then another context switch occurs. Windows switches thread contexts roughly every 30 milliseconds.

 

In fact, the performance impact of context switching may be greater than you imagine:

While a thread is running, the code and data it needs are kept in the CPU's high-speed cache, so the CPU does not have to access main memory (RAM) frequently (RAM is much slower to access than the cache).

After Windows switches the context, the code and data required by the newly scheduled thread are probably not in the CPU's cache, so the CPU must access RAM to refill the cache before it can get back up to full speed.

And about 30 milliseconds later, another context switch occurs.

In addition, when a time slice ends, if Windows decides to keep running the same thread, no context switch occurs.

A thread can also give up the rest of its time slice voluntarily, so that the CPU can be used to run other threads.

Take the Thread.Sleep method as an example; three arguments are special: -1, 0, and 1 millisecond (see the sketch after this list).

-1: the current thread sleeps forever (rarely useful; Windows no longer schedules the thread, but it still occupies memory).

0: the current thread gives up the rest of its time slice, prompting Windows to perform a context switch. Of course, Windows may simply keep running the current thread (when no thread of equal or higher priority needs to run).

1: Sleep(0) does not let lower-priority threads run, whereas Sleep(1) always yields, so even lower-priority threads get a chance to run.
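
Here is a minimal sketch (mine, not the author's) that exercises the three special arguments described above; Timeout.Infinite is simply the constant -1.

```csharp
using System;
using System.Threading;

// A minimal sketch of the three special Thread.Sleep arguments.
class SleepDemo
{
    static void Main()
    {
        // Give up the rest of the current time slice; only threads of equal
        // or higher priority may be scheduled instead.
        Thread.Sleep(0);

        // Always yield for at least ~1 ms, so even lower-priority threads
        // get a chance to run.
        Thread.Sleep(1);

        Console.WriteLine("Sleeping forever; press Ctrl+C to exit.");
        // Timeout.Infinite == -1: the thread is never scheduled again.
        Thread.Sleep(Timeout.Infinite);
    }
}
```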

 

Race Conditions

Take the following line of code, and assume the integer variable i starts at 0:

i = i + 1;

If two threads run this line of code at the same time, the value of i may end up as 1, which is inconsistent with the expected value 2. In that case we say a race condition has caused a thread-safety problem.

The problem occurs because multiple threads access a shared resource at the same time.

Code that contains race conditions is considered thread-unsafe.
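
The lost update is easy to reproduce. Below is a minimal sketch (an illustration of mine, not code from the article) in which two threads each increment the shared variable 100000 times without any synchronization; the final value is usually well below the expected 200000.

```csharp
using System;
using System.Threading;

// Two threads increment a shared counter with no synchronization,
// so some increments are lost to the race condition.
class RaceConditionDemo
{
    static int i = 0;   // shared resource

    static void Main()
    {
        ThreadStart work = () =>
        {
            for (int n = 0; n < 100000; n++)
            {
                i = i + 1;   // read-modify-write: not atomic
            }
        };

        var t1 = new Thread(work);
        var t2 = new Thread(work);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();

        // Expected 200000, but typically prints a smaller number.
        Console.WriteLine(i);
    }
}
```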

 

Locks and deadlocks

A typical way to resolve a race condition is to obtain exclusive access to the shared resource. The act of obtaining this privilege is called locking, and the privilege itself is the lock.
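
For example, here is a minimal sketch (my illustration) that protects the counter from the race-condition example above, using C#'s lock statement (which is built on Monitor):

```csharp
// The lock statement gives one thread at a time exclusive access to the
// critical section, so the read-modify-write of i can no longer interleave
// with another thread's update.
class LockedCounter
{
    private static readonly object _sync = new object();
    private static int i = 0;

    public static void Increment()
    {
        lock (_sync)
        {
            i = i + 1;
        }
    }
}
```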

However, acquiring and releasing locks carries considerable overhead and hurts performance. In some cases there is a better solution: atomic operations, which I sometimes call lightweight locks.
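
For example, here is a minimal sketch (my own, not from the article) of the same counter updated with Interlocked.Increment, which performs the read-modify-write as a single atomic operation, with no lock to acquire or release:

```csharp
using System.Threading;

// Interlocked.Increment updates the shared counter atomically.
class AtomicCounter
{
    private static int i = 0;

    public static void Increment()
    {
        Interlocked.Increment(ref i);
    }
}
```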

When two threads each hold a lock and wait for the other to release its lock, we have a deadlock.
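
A minimal sketch of how that happens (again my illustration, not the article's code): two threads acquire two locks in opposite order, and each ends up waiting forever for the lock the other holds.

```csharp
using System;
using System.Threading;

// t1 takes lockA then wants lockB; t2 takes lockB then wants lockA.
// Each thread waits forever for the lock held by the other: a deadlock.
class DeadlockDemo
{
    static readonly object lockA = new object();
    static readonly object lockB = new object();

    static void Main()
    {
        var t1 = new Thread(() =>
        {
            lock (lockA)
            {
                Thread.Sleep(100);   // give t2 time to take lockB
                lock (lockB) { Console.WriteLine("t1 finished"); }
            }
        });
        var t2 = new Thread(() =>
        {
            lock (lockB)
            {
                Thread.Sleep(100);   // give t1 time to take lockA
                lock (lockA) { Console.WriteLine("t2 finished"); }
            }
        });

        t1.Start();
        t2.Start();
        // Neither message is ever printed; both threads are blocked forever.
    }
}
```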


Coming next: Locks (lock, Monitor, ReaderWriterLockSlim) (1-2 articles expected)

Appendix, Demo: http://files.cnblogs.com/files/08shiyan/ParallelDemo.zip



(To be continued ...)

 
