Concurrent Programming 101

Source: Internet
Author: User

AsParallel() is not suitable for I/O-bound programs. PLINQ's default degree of parallelism is Min(number of CPUs, 64).
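A minimal PLINQ sketch for the CPU-bound case, where the default degree of parallelism is appropriate (the input range and work here are hypothetical example values):

```csharp
using System;
using System.Linq;

class PlinqDemo
{
    static void Main()
    {
        // CPU-bound work: PLINQ picks its own degree of parallelism,
        // Min(processor count, 64) by default. For I/O waits, prefer
        // async I/O instead of tying up pool threads with AsParallel().
        int sumOfSquares = Enumerable.Range(1, 1000)
            .AsParallel()
            .Select(n => n * n)
            .Sum();

        Console.WriteLine(sumOfSquares); // 333833500
    }
}
```

WithDegreeOfParallelism() can override the default, but for I/O-bound work a higher degree usually just wastes threads that sit blocked.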

 

ConcurrentQueue&lt;T&gt; implements thread safety internally. Obviously, coordinating concurrent writes and reads costs some performance; in most cases this overhead is small enough not to affect your requirements.

 

Queue&lt;T&gt; problems:

If no capacity is passed to the constructor and elements are enqueued extremely fast from multiple threads concurrently, the internal array resize can race and Enqueue throws an exception complaining that the target array is too short.

In a release build, the unhandled exception looks like:

Unhandled exception: System.AggregateException: One or more errors occurred. ---> System.ArgumentException: Destination array was not long enough. Check destIndex and length, and the array's lower bounds.
   at System.Array.Copy(Array sourceArray, Int32 sourceIndex, Array destinationArray, Int32 destinationIndex, Int32 length, Boolean reliable)
   at System.Collections.Generic.Queue`1.SetCapacity(Int32 capacity)
   at System.Collections.Generic.Queue`1.Enqueue(T item)
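The race disappears if the plain Queue&lt;T&gt; is replaced with ConcurrentQueue&lt;T&gt;, which handles growth safely under concurrent Enqueue. A small sketch (the element count is an arbitrary example value):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class SafeEnqueueDemo
{
    static void Main()
    {
        // ConcurrentQueue<T> grows safely under concurrent Enqueue,
        // unlike Queue<T>, whose internal array resize is not thread-safe.
        var queue = new ConcurrentQueue<int>();

        Parallel.For(0, 100_000, i => queue.Enqueue(i));

        Console.WriteLine(queue.Count); // 100000, every run
    }
}
```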

 

Best practices
  • All code you write should rely only on the guarantees made by the ECMA C# specification, and not on any of the implementation details explained in this article.
  • Avoid unnecessary use of volatile fields. Most of the time, locks or concurrent collections (System.Collections.Concurrent.*) are more appropriate for exchanging data between threads. In some cases, volatile fields can be used to optimize concurrent code, but you should use performance measurements to validate that the benefit outweighs the extra complexity.
  • Instead of implementing the lazy initialization pattern yourself using a volatile field, use the System.Lazy&lt;T&gt; and System.Threading.LazyInitializer types.
  • Avoid polling loops. Often, you can use a BlockingCollection&lt;T&gt;, Monitor.Wait/Pulse, events, or asynchronous programming instead of a polling loop.
  • Whenever possible, use the standard .NET concurrency primitives instead of implementing equivalent functionality yourself.
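The Lazy&lt;T&gt; recommendation above can be sketched as follows (the "loaded" payload is a placeholder for whatever expensive initialization you actually need):

```csharp
using System;

class LazyDemo
{
    // Thread-safe lazy initialization without a hand-rolled
    // volatile field or double-checked locking pattern.
    static readonly Lazy<string> Config =
        new Lazy<string>(() => "loaded", isThreadSafe: true);

    static void Main()
    {
        // The factory delegate runs once, on first access,
        // no matter how many threads race to read Value.
        Console.WriteLine(Config.Value);
    }
}
```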

 

 

Barrier class

Enables multiple tasks to work cooperatively on an algorithm in parallel, through multiple phases.
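A minimal Barrier sketch with three participants and two phases (the participant and phase counts are arbitrary example values):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class BarrierDemo
{
    static void Main()
    {
        // 3 participants; the post-phase action runs once each time
        // all participants have signaled.
        using var barrier = new Barrier(3,
            b => Console.WriteLine($"Phase {b.CurrentPhaseNumber} complete"));

        var workers = new Task[3];
        for (int w = 0; w < workers.Length; w++)
        {
            workers[w] = Task.Run(() =>
            {
                for (int phase = 0; phase < 2; phase++)
                {
                    // ... this worker's share of the phase's work ...
                    barrier.SignalAndWait(); // block until all 3 arrive
                }
            });
        }
        Task.WaitAll(workers);
    }
}
```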

Typical volatile application scenarios

class Foo
{
    int _answer;
    bool _complete;

    void A()
    {
        _answer = 123;
        _complete = true;
    }

    void B()
    {
        if (_complete) Console.WriteLine(_answer);
    }
}

If methods A and B ran concurrently on different threads, might it be possible for B to write "0"? The answer is yes, for the following reasons:
· The compiler, CLR, or CPU may reorder your program's instructions to improve efficiency.
· The compiler, CLR, or CPU may introduce caching optimizations such that assignments to variables won't be visible to other threads right away.

 

The culprit is optimization by the compiler, CLR, or CPU, so additional code is needed to cancel it.

You must explicitly defeat these optimizations by creating memory barriers (also called memory fences) to limit the effects of:

1. Instruction reordering
2. Read/write caching

Solution:

Full fences

The simplest kind of memory barrier is a full memory barrier (full fence), which prevents any kind of instruction reordering or caching around that fence. Calling Thread.MemoryBarrier generates a full fence; we can fix our example by applying four full fences as follows:
class Foo
{
    int _answer;
    bool _complete;

    void A()
    {
        _answer = 123;
        Thread.MemoryBarrier(); // barrier 1
        _complete = true;
        Thread.MemoryBarrier(); // barrier 2
    }

    void B()
    {
        Thread.MemoryBarrier(); // barrier 3
        if (_complete)
        {
            Thread.MemoryBarrier(); // barrier 4
            Console.WriteLine(_answer);
        }
    }
}

Remarks

MemoryBarrier is required only on weakly ordered multiprocessor systems (for example, systems using multiple Intel Itanium processors).

In most cases, the lock statement in C#, the SyncLock statement in Visual Basic, or the Monitor class provides a simpler way to synchronize data.
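Following that remark, here is the same Foo example fixed with a lock instead of explicit fences (the Main driver is added to make the sketch runnable and is not part of the original example):

```csharp
using System;
using System.Threading;

class Foo
{
    readonly object _gate = new object();
    int _answer;
    bool _complete;

    public void A()
    {
        lock (_gate)   // releasing the lock acts as a full fence
        {
            _answer = 123;
            _complete = true;
        }
    }

    public void B()
    {
        lock (_gate)   // acquiring the lock acts as a full fence
        {
            if (_complete) Console.WriteLine(_answer);
        }
    }

    static void Main()
    {
        var foo = new Foo();
        var t1 = new Thread(foo.A);
        var t2 = new Thread(foo.B);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        // Prints 123, or nothing if B takes the lock first; never 0.
    }
}
```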

 

Time consumed

Remember that acquiring and releasing an uncontended lock takes as little as 20 ns on a 2010-era desktop.

A full fence takes around ten nanoseconds on a 2010-era desktop.

 

Bind a thread to a CPU for execution

By calling SetThreadAffinityMask, you can set an affinity mask for each thread:

DWORD_PTR SetThreadAffinityMask(
    HANDLE hThread,                  // handle to thread
    DWORD_PTR dwThreadAffinityMask   // thread affinity mask
);
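A hedged P/Invoke sketch of calling this Windows API from C# (Windows-only; the guard and the mask value 1, meaning CPU 0, are illustrative choices):

```csharp
using System;
using System.Runtime.InteropServices;

class AffinityDemo
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern UIntPtr SetThreadAffinityMask(IntPtr hThread, UIntPtr dwThreadAffinityMask);

    [DllImport("kernel32.dll")]
    static extern IntPtr GetCurrentThread();

    static void Main()
    {
        if (OperatingSystem.IsWindows())
        {
            // Pin the calling thread to CPU 0 (bit 0 of the mask).
            // The return value is the previous mask, or 0 on failure.
            UIntPtr previous = SetThreadAffinityMask(GetCurrentThread(), (UIntPtr)1);
            Console.WriteLine(previous != UIntPtr.Zero ? "pinned to CPU 0" : "failed");
        }
        else
        {
            Console.WriteLine("Windows only");
        }
    }
}
```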

 

Thread.Yield()

It can be inserted anywhere in the code without changing a correct program's behavior; if inserting it causes a program error, the program has a bug (typically a race condition).
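A sketch of using Thread.Yield() as a race-surfacing probe (thread and iteration counts are arbitrary; here the increment is lock-protected, so the yields only widen the interleaving window without changing the result):

```csharp
using System;
using System.Threading;

class YieldProbe
{
    static int _counter;

    static void Main()
    {
        object gate = new object();
        var threads = new Thread[4];
        for (int t = 0; t < threads.Length; t++)
        {
            threads[t] = new Thread(() =>
            {
                for (int i = 0; i < 10_000; i++)
                {
                    Thread.Yield();              // probe: widens race windows
                    lock (gate) { _counter++; }  // protected, so still correct
                }
            });
            threads[t].Start();
        }
        foreach (var th in threads) th.Join();

        // 40000 every run; without the lock, a run with the yields
        // inserted would be far more likely to expose the lost updates.
        Console.WriteLine(_counter);
    }
}
```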

Thread switching time in Windows

Under Windows, a time slice is typically in the tens-of-milliseconds region, much larger than the CPU overhead of actually switching context between one thread and another (which is typically in the few-microseconds region).

Threads share heap memory to facilitate data exchange

A thread shares (heap) memory with the other threads running in the same application.

Use lambda expressions with caution in asynchronous code!

Example 1:

Problem:

for (int i = 0; i < 10; i++)
    new Thread(() => Console.Write(i)).Start();

The output is nondeterministic! Here's a typical result:
0223557799

Solution:

for (int i = 0; i < 10; i++)
{
    int temp = i;
    new Thread(() => Console.Write(temp)).Start();
}

Example 2:

string text = "t1";
Thread t1 = new Thread(() => Console.WriteLine(text));
text = "t2";
Thread t2 = new Thread(() => Console.WriteLine(text));
t1.Start();
t2.Start();

Because both lambda expressions capture the same text variable, t2 is printed twice:
t2
t2

Setting the thread name may help debugging.

A thread's foreground/background status has nothing to do with its priority or allotted execution time.

If raising the thread priority is not enough, you may also need to raise the process priority.
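A sketch of raising both, hedged because elevating the process priority class may require elevated rights on some platforms (the AboveNormal choice is an example value):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class PriorityDemo
{
    static void Main()
    {
        try
        {
            // Thread priority is relative to the owning process's
            // priority class, so raising only the thread may not help.
            Thread.CurrentThread.Priority = ThreadPriority.Highest;
            Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.AboveNormal;
            Console.WriteLine("raised");
        }
        catch (Exception e) // may need admin/root rights, or be unsupported
        {
            Console.WriteLine("not permitted: " + e.GetType().Name);
        }
    }
}
```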

Data sharing between processes can be achieved

by communicating via Remoting or memory-mapped files.

(Memory-mapped Files) http://msdn.microsoft.com/zh-cn/library/ms810613.aspx
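A minimal memory-mapped-file sketch using .NET's MemoryMappedFile type. For real cross-process sharing on Windows you would give the mapping a name (or back it with a file); named in-memory maps are not supported on Unix, so this self-contained example uses an anonymous map and the size/payload are arbitrary:

```csharp
using System;
using System.IO.MemoryMappedFiles;
using System.Text;

class MmfDemo
{
    static void Main()
    {
        // Anonymous 1 KB in-memory mapping (no backing file).
        using var mmf = MemoryMappedFile.CreateNew(null, 1024);

        // Writer side: length prefix followed by UTF-8 bytes.
        using (var writer = mmf.CreateViewAccessor())
        {
            byte[] payload = Encoding.UTF8.GetBytes("hello");
            writer.Write(0, payload.Length);
            writer.WriteArray(4, payload, 0, payload.Length);
        }

        // Reader side: in another process this would be OpenExisting(name).
        using (var reader = mmf.CreateViewAccessor())
        {
            int len = reader.ReadInt32(0);
            byte[] buf = new byte[len];
            reader.ReadArray(4, buf, 0, len);
            Console.WriteLine(Encoding.UTF8.GetString(buf)); // hello
        }
    }
}
```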

 

Differences between kernel mode and user mode in the operating system

So that programs cannot access resources at will, most CPU architectures support two execution modes: kernel mode and user mode. When the CPU runs in kernel mode, code can execute privileged instructions, has full access to any I/O device, can access any virtual address, and controls the virtual memory hardware; this mode corresponds to ring 0 on x86. The core of the operating system, including device drivers, runs in this mode. When the CPU runs in user mode, the hardware prevents the execution of privileged instructions and checks access to memory and I/O space; running code cannot enter kernel mode unless it passes through one of the operating system's gate mechanisms. This mode corresponds to ring 3 on x86; the operating system's user interface and all user applications run at this level.

 

The condition variable pattern

When an expression is complex, its evaluation is not atomic, so the waiting thread must re-test the condition under the lock after being woken.
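The pattern can be sketched with Monitor.Wait/Pulse; the while loop (rather than an if) is the essential part, because the predicate may be false again by the time the woken thread reacquires the lock (the 50 ms delay is an arbitrary example):

```csharp
using System;
using System.Threading;

class ConditionVariableDemo
{
    static readonly object Gate = new object();
    static bool _ready;

    static void Main()
    {
        var consumer = new Thread(() =>
        {
            lock (Gate)
            {
                // Re-test the condition in a loop, never a bare if:
                // evaluation of the predicate is not atomic with the wakeup.
                while (!_ready)
                    Monitor.Wait(Gate); // releases Gate while waiting
                Console.WriteLine("condition met");
            }
        });
        consumer.Start();

        Thread.Sleep(50);
        lock (Gate)
        {
            _ready = true;
            Monitor.Pulse(Gate); // wake one waiter
        }
        consumer.Join();
    }
}
```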

Which is more efficient: memory-mapped files, or plain file-stream buffering?

A lock is used to write files in openvas...

 

MSDN warning about raising the thread pool's minimum thread count

You can use the SetMinThreads method to increase the minimum number of threads. However, raising these values without need may cause performance problems. If too many tasks are started at the same time, all of them may appear to run slowly. In most cases, the thread pool performs better with its own thread-allocation algorithm. Setting the minimum number of idle threads below the number of processors can also hurt performance.
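A small sketch of inspecting and cautiously raising the minimum (the ProcessorCount * 2 target is an arbitrary example, not a recommendation):

```csharp
using System;
using System.Threading;

class MinThreadsDemo
{
    static void Main()
    {
        ThreadPool.GetMinThreads(out int worker, out int io);
        Console.WriteLine($"before: worker={worker}, io={io}");

        // Per the warning above, raise only when measurements show the
        // pool's own ramp-up is genuinely too slow for your workload.
        bool ok = ThreadPool.SetMinThreads(Environment.ProcessorCount * 2, io);
        Console.WriteLine(ok ? "raised" : "rejected");
    }
}
```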
