WinCE multithreading can yield twice the result with half the effort, but resource competition must be avoided

Source: Internet
Author: User

The WinCE system is a multitasking operating system. Multitasking is generally implemented through multithreading and multi-process mechanisms, so mastering multi-process and multi-thread techniques is a prerequisite for writing complex WinCE embedded programs. Because multithreaded programming can effectively handle many kinds of parallel work, it makes some particularly complex overall designs and solutions more concise and clear.

I recently learned a painful lesson about neglecting multithreading in an embedded development project. Because multithreading was handled poorly during development, threads frequently competed for resources, making the system slow or paralyzed. Hunting down these hard-to-see multithreading errors forced the project team to spend a great deal of extra time, which not only set the development schedule back badly but also delayed the product release. In WinCE programming, multithreading is not as simple as it looks on the surface. This article shares some of my experiences and lessons with multithreading from that project.

1. The two most common types of resource competition in multithreading

There are two programming methods for developing an embedded system: kernel-mode programming and user-mode programming. When a process is executing kernel code on its behalf, the process is said to be in kernel mode, and the CPU runs at the highest privilege level. When a process executes its own code, it is in user mode, and the CPU runs at the lowest privilege level. In kernel mode the CPU can execute any instruction; in user mode it can execute only unprivileged instructions, and the program cannot freely touch kernel address space. For this reason, multi-thread resource conflicts are more common in kernel-mode programming than in user-mode programming.

In this embedded project, because the WinCE kernel was customized and trimmed by our own project team, many team members wanted the code to be smaller and to execute more efficiently. Processes frequently dropped into kernel mode to run, and the team over-relied on multithreading to boost execution efficiency. The result was that efficiency did not improve; instead, resource competition occurred constantly in the WinCE system. In mild cases the system ran slowly; in severe cases it deadlocked and crashed. The lesson we drew from this failure: multithreaded programming can achieve twice the result with half the effort, but the safety of multi-thread resource competition must be taken seriously. How to balance the two is a question every developer faces.

(1) Multi-thread resource conflicts

The WinCE system supports multithreading, which is undoubtedly a good thing. However, it also brings a tricky design problem: guaranteeing safety in the face of multi-thread resource competition. In general, multi-thread safety refers to the reliability of the results when code is called by one thread and then called again by another thread before the first call returns. If the results are reliable, the code is thread-safe; if they are unreliable, the code is thread-unsafe because of a multi-thread resource conflict. If multi-thread resource safety is handled improperly, the program will misbehave at run time, and in serious cases the system will crash.

(2) Multi-thread deadlock

From the first point above, it is clear that preventing thread resource conflicts is extremely important in WinCE multithreaded programming. During this project we therefore locked a large number of resources so that no other thread could access them, in order to avoid resource competition. What we did not anticipate is that this led to the other common problem of multi-thread resource competition: deadlock. During project testing, the most common failure we encountered was deadlock caused by mishandled multi-thread competition, and we later had to analyze each resource deadlock one by one, which made every member of the development team suffer.

A deadlock in WinCE programming refers to a situation in which two or more concurrent processes (or threads) each hold some resource while waiting for the others to release the resources they hold, producing a mutual wait caused by resource competition. Without outside intervention, none of them can make progress; the system is then said to be in a deadlock state, and the processes that wait on each other forever are called deadlocked processes.

 

2. Why does multi-thread resource competition occur?

Processes and threads are among the most basic services of the WinCE kernel and are major components of it. Knowledge of both is fundamental for a WinCE embedded software developer; only by mastering it can the services provided by the WinCE kernel be fully exploited.

(1) What is multithreading?

Before discussing multi-thread resource safety, we must first understand what a thread is. WinCE is a preemptive multitasking operating system: the scheduler can interrupt a running task at any time. Multithreading is a mechanism that allows multiple instruction streams to execute concurrently within one program; each instruction stream is called a thread, and threads are independent of one another. A thread can be in one of five states: running, suspended, sleeping, blocked, or terminated. Multithreading is a special form of multitasking whose goal is to let several threads work in parallel to complete multiple tasks and improve system efficiency.

Apart from the resources essential for running (such as registers and a stack), a thread itself occupies almost no system resources, yet it shares all the resources of its process with the other threads of the same process. Like a process, a thread can create or terminate another thread, and threads within the same process can execute concurrently. From the perspective of concurrency, execution can be concurrent not only between different processes but also between multiple threads within the same process. This gives the WinCE system better concurrency, makes more effective use of system resources, and to a large extent improves overall system efficiency.

(2) Multi-thread resource competition stems from synchronization and contention

Preventing access conflicts on resources shared by multiple threads is extremely important in WinCE. Normally, the thread allowed to execute first gains exclusive access to the variable or object: when the first thread accesses an object, it locks that object, which blocks any other thread that wants to access the same object until the first thread releases its lock. In the WinCE system, if a thread cannot obtain a resource it needs, it suspends execution (blocks) until the resource becomes available. Thus, while the WinCE system runs, each thread repeatedly locks and unlocks resources.

It follows that threads must communicate and coordinate with one another to complete their tasks. For example, when multiple threads access the same resource, you must ensure that one thread cannot modify the resource's data while another is reading it, which requires communication between threads. Likewise, before a thread starts its next task it may have to wait for another thread to terminate, which also requires communication. In general, the synchronization mechanisms available in user mode are interlocked functions (atomic access) and the critical section (which guarantees that resources inside the critical section are not accessed by other threads). The kernel-mode mechanisms are the event object (the thread sleeps while the kernel handles the wait), the mutex object (similar to a critical section but relatively slower), the semaphore object (used to limit the number of concurrent accesses to a resource), and the message queue, msgqueue (which uses a small amount of memory to pass messages).

 

 

3. How to avoid resource competition among multiple threads?

Multithreaded execution is one of the features of the WinCE system, and we should exploit it in our programming to improve execution efficiency. However, multiple threads must be handled carefully to keep them from competing for resources. Different situations call for different techniques, which fall mainly into the following two categories:

(1) Establish a synchronization mechanism to avoid mutual competition between processes

Under normal circumstances, a running process generally does not affect other running processes. However, if a process (or thread) has special requirements for an exclusive resource, no other process may be allowed to use that resource while it runs; this raises the problem of mutual exclusion and competition between processes. The core idea of process mutual exclusion is simple: when a process starts, it first checks whether an instance of the process already exists in the system. If not, the process creates itself successfully and sets a tag indicating that an instance now exists; any process created later discovers through this tag that an instance is already present, so the process can have only one instance in the system. The tag can be implemented in several ways, such as a memory-mapped file, a named event, a named mutex, or a global shared variable.

Generally, thread synchronization can be achieved with critical sections, mutexes, semaphores, and events. Here are the two most common methods. ① Critical section: the most direct thread synchronization method. A critical section is a piece of code that only one thread may execute at a time. For example, if the code that initializes an array is placed in a critical section, a second thread cannot enter it until the first thread has finished. ② Mutex: a mutex is very similar to a critical section, but there are two key differences. First, a mutex can be used to synchronize threads across processes. Second, a mutex can be given a string name, and an additional handle to an existing mutex object can be created by referring to that name.

In addition, note that the biggest difference between a critical section and a kernel object such as an event is performance. When there is no thread contention, entering a critical section usually costs on the order of 10-15 time slices, whereas an event object costs considerably more because it involves a trip into the system kernel. The critical section is therefore the best way to synchronize threads within a single process: it operates at process level rather than system level.

(2) Create a resource allocation graph to avoid process deadlocks

In general, multi-thread deadlock is related to the processes' resource requirements, their execution speeds, and the resource allocation policy. If deadlock cannot be ruled out by design, a resource allocation graph should be maintained: by carefully tracking every thread in the system and the shared resources it has locked, and periodically checking the graph, the system can promptly discover circular-wait patterns and thus identify potential deadlocks in advance.

To build the resource allocation graph, you must identify every protected shared resource and every thread that refers to it. The following steps can generally be used: ① Identify all system calls that may block, since every protected shared resource has some blocking calls associated with accessing it. ② Identify the blocking calls used to acquire shared resources and find those calls in the source code. ③ For each call, record the name of the thread that refers to the resource and the name of the resource; usually the call itself passes the protected resource as a parameter. In this way all protected resources, and the threads the resources are allocated to, can be identified. ④ Build the resource allocation graph and check whether any resource lies on a circular path.

Of course, it is sometimes impractical or impossible to determine every shared resource in advance and build the graph statically. In that case, extra code can be added to detect potential deadlocks while the system runs. Many different algorithms exist to optimize this detection, but in essence they still build a resource allocation graph dynamically.
