FreeRTOS Series 19---FreeRTOS Semaphores

Source: Internet
Author: User
Tags: priority inheritance, mutex, semaphore
This article introduces the basics of semaphores; for a detailed source-code analysis, see "FreeRTOS Advanced 6---FreeRTOS Semaphore Analysis".

1. Semaphore Introduction

FreeRTOS semaphores include binary semaphores, counting semaphores, mutex semaphores (hereafter referred to as mutexes), and recursive mutex semaphores (hereafter referred to as recursive mutexes).

Mutexes and recursive mutexes can be thought of as special kinds of semaphores. Mutexes and semaphores differ in how they are used:

  • A semaphore is used for synchronization, either between tasks or between a task and an interrupt; a mutex is used for mutual exclusion, locking a resource that only one task may access at a time.

  • When a semaphore is used for synchronization, one task (or interrupt) usually gives the semaphore and another task takes it; a mutex must be taken and given back by the same task.

  • A mutex has a priority inheritance mechanism; a semaphore does not.

  • A mutex cannot be used in an interrupt service routine; a semaphore can.

  • The API functions for creating a mutex and creating a semaphore are different, but they share the same take and give API functions (a minimal sketch of the creation calls follows this list).
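To illustrate the last point, here is a minimal sketch of the different creation calls sharing the same take/give calls. It assumes a FreeRTOSConfig.h with configUSE_MUTEXES, configUSE_RECURSIVE_MUTEXES and configUSE_COUNTING_SEMAPHORES enabled and dynamic allocation available; the function name vCreateExamples is made up for illustration.

    #include "FreeRTOS.h"
    #include "semphr.h"

    void vCreateExamples( void )
    {
        /* Each kind of semaphore has its own creation function... */
        SemaphoreHandle_t xBinary    = xSemaphoreCreateBinary();
        SemaphoreHandle_t xCounting  = xSemaphoreCreateCounting( 5, 0 );  /* max count 5, initial count 0 */
        SemaphoreHandle_t xMutex     = xSemaphoreCreateMutex();
        SemaphoreHandle_t xRecursive = xSemaphoreCreateRecursiveMutex();

        /* ...but (apart from the recursive variant) they share the same
           take and give functions. */
        if( ( xMutex != NULL ) && ( xSemaphoreTake( xMutex, pdMS_TO_TICKS( 100 ) ) == pdPASS ) )
        {
            xSemaphoreGive( xMutex );
        }

        ( void ) xBinary; ( void ) xCounting; ( void ) xRecursive;
    }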

2. Binary Semaphore

A binary semaphore can be used both for mutual exclusion and for synchronization.

Binary semaphores and mutexes are very similar, but there is a subtle difference: a mutex includes a priority inheritance mechanism, while a binary semaphore does not. This makes the binary semaphore better suited to synchronization (between tasks, or between tasks and interrupts), and the mutex better suited to simple mutual exclusion. This section describes only binary semaphores used for synchronization.

The semaphore API functions allow a block time to be specified. When a task attempts to take a semaphore that is not available, the task enters the Blocked state; the block time is the maximum time the task will remain blocked, measured in tick periods. If more than one task is blocked on the same semaphore, the highest-priority task is unblocked first when the semaphore becomes available.

A binary semaphore can be thought of as a queue that can hold only one item, so the queue is either empty or full (hence "binary"). Tasks and interrupts that use the queue do not care what it holds---they only need to know whether it is empty or full. This mechanism can be used to synchronize a task with an interrupt.

Consider a task used to service a peripheral. Polling the peripheral wastes CPU time and prevents other tasks from running. A better approach is for the task to spend most of its time in the Blocked state (allowing other tasks to run) and execute only when a particular event occurs. This can be achieved with a binary semaphore: when the task tries to take the semaphore and the event has not yet occurred, the semaphore is unavailable and the task blocks; when the peripheral needs servicing, an interrupt occurs and its service routine simply gives the semaphore (writes to the queue). The task only takes the semaphore and never gives it back; the interrupt only gives it and never takes it.

Task priorities can be used to ensure the peripheral is serviced in a timely manner. A queue can also be used in place of a binary semaphore: the interrupt routine captures the data associated with the peripheral event and sends it to the task's queue; the task unblocks when data arrives in the queue and processes it as required. This second approach keeps the interrupt handler as short as possible, with all other processing done in the task.

Note: Never call an API function that does not end in "FromISR" from within an interrupt service routine.

Note: In most applications, a task notification can replace a binary semaphore; it is faster and generates less code (a minimal sketch follows).
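As a rough illustration of this note, here is a minimal sketch of the direct-to-task notification pattern that can replace a binary semaphore, assuming the handler task's handle was stored in xHandlerTaskHandle when the task was created; the ISR name vExampleISR is made up.

    #include "FreeRTOS.h"
    #include "task.h"

    static TaskHandle_t xHandlerTaskHandle;   /* stored when the handler task is created */

    /* Task side: block until the ISR sends a notification (like taking a
       binary semaphore). */
    static void vNotifyHandlerTask( void *pvParameters )
    {
        for( ;; )
        {
            if( ulTaskNotifyTake( pdTRUE, portMAX_DELAY ) > 0 )   /* pdTRUE: clear the count on exit */
            {
                /* Handle the event here. */
            }
        }
    }

    /* ISR side: send the notification (like giving the semaphore). */
    void vExampleISR( void )
    {
        BaseType_t xHigherPriorityTaskWoken = pdFALSE;

        vTaskNotifyGiveFromISR( xHandlerTaskHandle, &xHigherPriorityTaskWoken );
        portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
    }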


Figure 1-1: Synchronization---using a semaphore between an interrupt and a task

As shown in Figure 1-1, when the program starts running the semaphore is not available, so the task blocks on it. After some time an interrupt occurs, and the interrupt service routine calls the API function xSemaphoreGiveFromISR() to give the semaphore, making it available. When the interrupt service routine exits, a context switch takes place: the task is unblocked, takes the semaphore with the API function xSemaphoreTake(), and runs. Taking the semaphore makes it unavailable again, so the task re-enters the Blocked state the next time it tries to take it.
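A minimal sketch of the sequence in Figure 1-1 follows, assuming an interrupt source whose handler may call the FreeRTOS ISR-safe API; the ISR name vExamplePeripheralISR, the task name and the task priority are made up for illustration.

    #include "FreeRTOS.h"
    #include "task.h"
    #include "semphr.h"

    static SemaphoreHandle_t xBinarySemaphore;

    /* Task that waits for the peripheral event. */
    static void vHandlerTask( void *pvParameters )
    {
        for( ;; )
        {
            /* Block indefinitely until the ISR gives the semaphore. */
            if( xSemaphoreTake( xBinarySemaphore, portMAX_DELAY ) == pdPASS )
            {
                /* Service the peripheral here. */
            }
        }
    }

    /* Interrupt service routine for the peripheral (name is illustrative). */
    void vExamplePeripheralISR( void )
    {
        BaseType_t xHigherPriorityTaskWoken = pdFALSE;

        /* Give the semaphore to unblock vHandlerTask. */
        xSemaphoreGiveFromISR( xBinarySemaphore, &xHigherPriorityTaskWoken );

        /* Request a context switch on exit if a higher-priority task was woken. */
        portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
    }

    void vStartHandlerExample( void )
    {
        xBinarySemaphore = xSemaphoreCreateBinary();

        if( xBinarySemaphore != NULL )
        {
            xTaskCreate( vHandlerTask, "Handler", configMINIMAL_STACK_SIZE, NULL, 3, NULL );
        }
    }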

3. Counting Semaphore

A binary semaphore can be thought of as a queue of length 1, and a counting semaphore as a queue with a length greater than 1. Again, users of the semaphore do not care about the data stored in the queue, only whether the queue is empty.

Counting semaphores are typically used in two ways:

  • Counting events: each time the event occurs, the event handler gives the semaphore (incrementing its count), and each time the event is processed, a handler task takes the semaphore (decrementing its count). The count value is therefore the difference between the number of events that have occurred and the number that have been processed. In this case the semaphore is created with an initial count of zero.

  • Resource management: the count value represents the number of resources currently available. A task must first take the semaphore to gain control of a resource; when the count reaches zero, no resources are left. When the task finishes with a resource it gives the semaphore back, incrementing the count. In this case the semaphore is created with an initial count equal to the maximum number of resources (a minimal sketch of this usage follows this list).
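Here is a minimal sketch of the resource-management usage, assuming configUSE_COUNTING_SEMAPHORES is enabled; the pool of three buffers and the function names are hypothetical.

    #include "FreeRTOS.h"
    #include "semphr.h"

    #define NUM_BUFFERS 3   /* hypothetical pool of three identical buffers */

    static SemaphoreHandle_t xBufferCountSem;

    void vPoolInit( void )
    {
        /* Resource management: maximum count and initial count both equal
           the number of resources in the pool. */
        xBufferCountSem = xSemaphoreCreateCounting( NUM_BUFFERS, NUM_BUFFERS );
    }

    void vUseBuffer( void )
    {
        /* Take: the count decrements; block up to 50 ms if no buffer is free. */
        if( xSemaphoreTake( xBufferCountSem, pdMS_TO_TICKS( 50 ) ) == pdPASS )
        {
            /* ... use one buffer from the pool ... */

            /* Give: the count increments, marking the buffer free again. */
            xSemaphoreGive( xBufferCountSem );
        }
    }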

Note: Never call an API function that does not end in "FromISR" from within an interrupt service routine.

Note: In most applications, a task notification can be used in place of a counting semaphore; it is faster and generates less code.

4. Mutex

A mutex is a binary semaphore with a priority inheritance mechanism. For synchronization (between tasks, or between tasks and interrupts), a binary semaphore is the better choice; a mutex is better suited to simple mutual exclusion.

A mutex used for mutual exclusion acts as a token that guards a resource. When a task wants to access the resource, it must first take the token; when it has finished with the resource, it must give the token back so that other tasks can access the same resource.

Mutexes and semaphores share the same take and give API functions, so a block time can also be specified for a mutex. The block time is measured in tick periods and indicates the maximum number of ticks a task will block when the mutex is not available.

The difference between a mutex and a binary semaphore is that the mutex has a priority inheritance mechanism. If a mutex (token) is held by a low-priority task and a high-priority task attempts to take it, the high-priority task blocks because the mutex is unavailable, and the low-priority task holding the mutex has its priority temporarily raised to that of the blocked high-priority task. This raising of priority is called priority inheritance. The mechanism ensures that the high-priority task stays blocked for as short a time as possible and minimizes the effect of the "priority inversion" that has already occurred.

Often there is only one instance of a hardware resource; while a low-priority task is using it, even a high-priority task can only wait for the low-priority task to release it. This situation, where a high-priority task cannot run while a lower-priority task can, is called priority inversion.

Why does priority inheritance reduce the impact of priority inversion? Suppose there are three tasks, Task A, Task B and Task C, with priorities Task C > Task B > Task A. Both Task A and Task C use the same hardware resource, and Task A currently holds it.

First, the case without priority inheritance: Task C tries to use the resource, but Task A is already using it, so Task C blocks, and the priority order of the three tasks is unchanged. After Task C blocks, a hardware interrupt occurs whose event unblocks Task B. When the interrupt completes, Task B preempts Task A because Task B has the higher priority. The blocking time of Task C is therefore at least: interrupt processing time + Task B's run time + Task A's run time.

Now the case with priority inheritance: Task C tries to use the resource, but Task A is already using it, so Task C blocks. Task A inherits Task C's priority, so the priority order changes to Task C = Task A > Task B. After Task C blocks, a hardware interrupt occurs whose event unblocks Task B. When the interrupt completes, Task A keeps the CPU because its temporarily raised priority is higher than Task B's. When Task A finishes with the resource and releases the mutex, the high-priority Task C takes over the CPU. The blocking time of Task C is therefore only: interrupt processing time + Task A's run time. Task C's blocking time is shorter, and that is the benefit of priority inheritance (see the sketch below).
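A minimal sketch of this scenario follows, assuming configUSE_MUTEXES is enabled; the task names, priorities and delays are made up, and the medium-priority Task B is omitted for brevity.

    #include "FreeRTOS.h"
    #include "task.h"
    #include "semphr.h"

    static SemaphoreHandle_t xResourceMutex;   /* guards the shared hardware resource */

    /* Low-priority Task A: holds the mutex while using the resource. */
    static void vTaskA( void *pvParameters )
    {
        for( ;; )
        {
            if( xSemaphoreTake( xResourceMutex, portMAX_DELAY ) == pdPASS )
            {
                /* While the high-priority Task C waits for this mutex, the kernel
                   temporarily raises Task A to Task C's priority, so a
                   medium-priority Task B cannot preempt this section. */
                xSemaphoreGive( xResourceMutex );
            }
            vTaskDelay( pdMS_TO_TICKS( 10 ) );
        }
    }

    /* High-priority Task C: also needs the resource. */
    static void vTaskC( void *pvParameters )
    {
        for( ;; )
        {
            if( xSemaphoreTake( xResourceMutex, portMAX_DELAY ) == pdPASS )
            {
                xSemaphoreGive( xResourceMutex );
            }
            vTaskDelay( pdMS_TO_TICKS( 10 ) );
        }
    }

    void vStartInheritanceDemo( void )
    {
        xResourceMutex = xSemaphoreCreateMutex();
        xTaskCreate( vTaskA, "TaskA", configMINIMAL_STACK_SIZE, NULL, 1, NULL );
        xTaskCreate( vTaskC, "TaskC", configMINIMAL_STACK_SIZE, NULL, 3, NULL );
    }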

Priority inheritance does not eliminate priority inversion; it only minimizes its impact. Hard real-time systems should be designed from the outset to avoid priority inversion.


Figure 4-1: A mutex used to protect a resource

As shown in Figure 4-1, a mutex is used to protect a resource; to access the resource, a task must first obtain the mutex. Task A wants to use the resource, so it first calls the API function xSemaphoreTake() to take the mutex; once the take succeeds, Task A holds the mutex and can safely access the resource. Task B then starts running and also wants to access the resource, so it too tries to take the mutex, but the mutex is unavailable and Task B blocks. When Task A has finished with the resource, it calls the API function xSemaphoreGive() to return the mutex. Task B is then unblocked, takes the mutex with xSemaphoreTake(), and can access the resource.
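A minimal sketch of this pattern follows, assuming configUSE_MUTEXES is enabled and that xResourceMutex has been created at start-up with xSemaphoreCreateMutex(); the function name vAccessSharedResource is made up.

    #include "FreeRTOS.h"
    #include "semphr.h"

    static SemaphoreHandle_t xResourceMutex;   /* created once with xSemaphoreCreateMutex() */

    void vAccessSharedResource( void )
    {
        /* Take the mutex (the token) before touching the resource; wait at
           most 100 ms if another task currently holds it. */
        if( xSemaphoreTake( xResourceMutex, pdMS_TO_TICKS( 100 ) ) == pdPASS )
        {
            /* ... exclusive access to the shared resource ... */

            /* Return the token so other tasks can access the resource. */
            xSemaphoreGive( xResourceMutex );
        }
    }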

5. Recursive Mutex

A task that holds a recursive mutex can take the same recursive mutex again. A recursive mutex that has been taken with xSemaphoreTakeRecursive() a number of times must be given back with xSemaphoreGiveRecursive() the same number of times before it becomes available again. For example, if a task successfully takes a recursive mutex five times, the mutex does not become available to any other task until the holder has given it back five times.

A recursive mutex can be regarded as a semaphore with a priority inheritance mechanism, and the task that takes a recursive mutex must give it back after use (a minimal sketch follows).
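Here is a minimal sketch of nested take/give calls, assuming configUSE_RECURSIVE_MUTEXES is enabled and xRecursiveMutex has been created with xSemaphoreCreateRecursiveMutex(); the function names are made up.

    #include "FreeRTOS.h"
    #include "semphr.h"

    static SemaphoreHandle_t xRecursiveMutex;   /* created with xSemaphoreCreateRecursiveMutex() */

    static void prvInnerHelper( void )
    {
        /* The holding task may take the recursive mutex again without blocking. */
        if( xSemaphoreTakeRecursive( xRecursiveMutex, portMAX_DELAY ) == pdPASS )
        {
            /* ... work on the shared resource ... */
            xSemaphoreGiveRecursive( xRecursiveMutex );   /* count drops from 2 back to 1 */
        }
    }

    void vOuterFunction( void )
    {
        if( xSemaphoreTakeRecursive( xRecursiveMutex, pdMS_TO_TICKS( 100 ) ) == pdPASS )
        {
            prvInnerHelper();   /* nested take: the hold count becomes 2 */

            /* Only after the matching number of gives does the mutex become
               available to other tasks. */
            xSemaphoreGiveRecursive( xRecursiveMutex );
        }
    }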

A mutex cannot be used in an interrupt service routine, because:

  • A mutex has a priority inheritance mechanism, which only makes sense when the mutex is taken and given from a task, not from an interrupt.

  • An interrupt cannot block to wait for a mutex.
