Inter-process communication in the RT-Thread kernel


The concepts of synchronization and mutual exclusion come through very clearly here. Reproduced from: http://www.cnblogs.com/King-Gentleman/p/4311582.html

I. Inter-process communication mechanisms

The IPC (inter-process communication, i.e. inter-thread synchronization and communication) facilities of the RT-Thread operating system comprise interrupt locks, scheduler locks, semaphores, mutexes, events, mailboxes, and message queues. The first five mainly provide inter-thread synchronization, while the mailbox and the message queue provide inter-thread communication. This article introduces their characteristics and typical use cases.

1. Interrupt lock

Disabling interrupts, also called an interrupt lock, is the simplest way to keep multiple tasks out of a critical section, and it works even in a time-sharing operating system. While interrupts are disabled, the current task cannot be interrupted by other events (the whole system no longer responds to any external event that could trigger a thread reschedule), so the current thread will not be preempted unless it voluntarily gives up the processor. The disable/restore interrupt API is implemented by the BSP and differs from platform to platform. On the STM32 platform, for example, the interrupt lock is implemented by the disable function rt_base_t rt_hw_interrupt_disable(void), which disables interrupts and returns the interrupt state that was in effect before the call, and the restore function void rt_hw_interrupt_enable(rt_base_t level), which restores the interrupt state saved by rt_hw_interrupt_disable().
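As a minimal sketch of how the two functions pair up (the variable `shared_counter` and the function name are illustrative, not from the article; an RT-Thread build environment is assumed):

```c
#include <rtthread.h>

static volatile rt_uint32_t shared_counter;  /* hypothetical shared data */

void increment_shared(void)
{
    rt_base_t level;

    /* disable interrupts and remember the previous interrupt state */
    level = rt_hw_interrupt_disable();

    /* critical section: keep it to a few machine instructions */
    shared_counter++;

    /* restore the interrupt state saved above */
    rt_hw_interrupt_enable(level);
}
```

Saving and restoring the previous state (rather than unconditionally re-enabling) is what makes the pair safe to nest.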
Warning: Because disabling interrupts stops the whole system from responding to external interrupts, when you use an interrupt lock to protect a critical section you must first make sure interrupts stay disabled only very briefly, on the order of a few machine instructions.

An interrupt lock can be applied in any situation, and since the other synchronization methods are all built on top of it, the interrupt lock can be called the most powerful and most efficient synchronization method. Its main problem is that while interrupts are disabled the system no longer responds to any interrupt and thus cannot respond to external events. The interrupt lock therefore has a large impact on the real-time behaviour of the system: used improperly it can destroy the system's real-time guarantees (the system may completely miss its timing requirements), while used properly it is a fast and efficient synchronization method. For example, the quickest way to make a single line of code, such as an assignment, execute exclusively is to use an interrupt lock rather than a semaphore or mutex.

2. Scheduler lock

Like the interrupt lock, locking the scheduler keeps the currently running task from being switched out until the scheduler is unlocked. Unlike the interrupt lock, however, the system can still respond to external interrupts while the scheduler is locked, and interrupt service routines still run. So when using the scheduler lock for task synchronization, you must consider whether the critical resource the task accesses might be modified by an interrupt service routine; if it might, the scheduler lock is not a suitable synchronization method.
In the RT-Thread system the scheduler lock mechanism is implemented by the lock function void rt_enter_critical(void) and the unlock function void rt_exit_critical(void). While the scheduler is locked the system still responds to interrupts; if an interrupt wakes up a higher-priority thread, the scheduler will not run it immediately, and no reschedule is attempted until the unlock function is called. When rt_exit_critical() is called and the system leaves the critical section, it checks whether a higher-priority thread is now ready: if a thread with a higher priority than the current one is ready, the system switches to it; otherwise the current task continues to execute.

Note: rt_enter_critical/rt_exit_critical can be nested, but every call to rt_enter_critical must be matched by a call to rt_exit_critical. The maximum nesting depth is 65535.

The scheduler lock is convenient for thread-to-thread synchronization, and being lightweight it places no burden on the system's interrupt response. Its limitation is equally obvious: it cannot be used for synchronization or notification between an interrupt and a thread, and if the scheduler stays locked too long this affects the real-time behaviour of the system (while the scheduler is locked, thread priorities no longer take effect, until the locked state is left).

3. Semaphore

A semaphore is a lightweight kernel object used to solve synchronization problems between threads; a thread can take or release it to achieve synchronization or mutual exclusion.
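A sketch of the scheduler-lock pattern (the function name is illustrative; an RT-Thread build environment is assumed):

```c
#include <rtthread.h>

void update_shared_state(void)
{
    /* lock the scheduler: interrupts still fire, but no thread switch occurs */
    rt_enter_critical();

    /* ... touch data shared only between threads (never with an ISR) ... */

    /* unlock; if a higher-priority thread became ready meanwhile,
       the system switches to it here */
    rt_exit_critical();
}
```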
A semaphore is like a key that locks a critical section and admits only the thread holding the key: a thread that obtains the key is allowed to enter the critical section, and when it leaves it hands the key to the next thread queued behind it, letting subsequent threads enter the critical section in turn.

A potential problem with semaphores is thread priority inversion. Priority inversion occurs when a high-priority thread tries to access a shared resource through the semaphore mechanism while the semaphore is already held by a low-priority thread, and that low-priority thread may in turn be preempted by some medium-priority thread while it runs. As a result the high-priority thread is blocked by many threads of lower priority, and its real-time behaviour is hard to guarantee. For example, take three threads A, B and C with priorities A > B > C. Threads A and B are suspended, each waiting for an event to fire, while thread C is running and starts using a shared resource M. While C is using it, the event thread A is waiting for arrives; A becomes ready and, having a higher priority than C, runs immediately. But when A tries to use the shared resource M, it is suspended because C is still using it, and the system switches back to C. If the event thread B is waiting for now arrives, B becomes ready, and since B has a higher priority than C, B runs until it finishes; only then does C resume. Thread A can execute only after C releases the shared resource M. The priorities have thus been inverted: B runs before A, and the response time of the high-priority thread can no longer be guaranteed. To address this, the RT-Thread operating system implements a priority inheritance algorithm.
Priority inheritance resolves the priority inversion problem by raising the priority of thread C to that of thread A for the time that A is blocked. This prevents C (and thereby, indirectly, A) from being preempted by B. The priority inheritance protocol raises the priority of the low-priority thread occupying a resource to match the highest priority among all the threads waiting for that resource, lets it run, and restores its priority to the original setting when it releases the resource. A thread that has inherited a priority therefore cannot be preempted by any intermediate-priority thread while it holds the resource.

Thread synchronization is the simplest semaphore application. For example, two threads use a semaphore to hand control over between tasks: the semaphore is initialized with zero resource instances (its value is initialized to 0), so the waiting thread pends on the semaphore directly. When the signalling thread finishes the work at hand, it releases the semaphore, waking the thread waiting on it to carry out the next part of the work. Such cases can also be seen as using the semaphore as a work-completion flag: the signalling thread finishes its own work, then notifies the waiting thread to continue with the next part.

As a lock, a single semaphore is often applied to access to the same critical section from multiple threads. When used as a lock, the semaphore is normally initialized with one resource instance (its value is initialized to 1), meaning the system has one resource available by default. When a thread needs to access the critical resource, it must obtain this resource lock first.
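The inheritance rule itself can be sketched as a small host-side function (illustrative only, not RT-Thread API; in RT-Thread a numerically smaller value means a higher priority):

```c
#include <stddef.h>

/* Under priority inheritance, the resource holder temporarily runs at the
 * highest priority (smallest number) among itself and all waiters on the
 * resource; its original priority is restored on release. */
unsigned inherited_priority(unsigned holder, const unsigned *waiters, size_t n)
{
    unsigned p = holder;
    for (size_t i = 0; i < n; i++)
        if (waiters[i] < p)     /* a waiter with higher priority */
            p = waiters[i];
    return p;
}
```

In the A/B/C example above, C (say priority 10) holding the resource while A (priority 3) waits runs at priority 3, so B (priority 5) can no longer preempt it.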
When one thread successfully obtains the resource lock, other threads intending to enter the critical section are suspended on the semaphore, because the lock is already locked when they try to take it (the semaphore value has been decremented from 1 to 0). When the thread holding the semaphore finishes and leaves the critical section, it releases the semaphore and unlocks the lock, and the first waiting thread suspended on it is woken up and gains access to the critical section. Because the semaphore's value always alternates between 1 and 0, this kind of lock is also called a binary semaphore.

The semaphore is also convenient for synchronization between an interrupt and a thread, for example when an interrupt fires and the interrupt service routine needs to notify a thread to do the corresponding data processing. In this case the initial value of the semaphore is set to 0. When the thread tries to take the semaphore it suspends on it immediately, since the initial value is 0, until the semaphore is released. When the interrupt fires, the ISR performs the hardware-related actions, such as reading data from the hardware's I/O port and acknowledging the interrupt to clear the interrupt source, and then releases the semaphore to wake up the corresponding thread for the subsequent data processing.

Warning: An interrupt service routine must not use a semaphore (lock) for mutual exclusion with a thread; an interrupt lock should be used instead.

Resource counting suits situations where thread speeds are mismatched: the semaphore counts the work items completed by the front thread, and when the back thread is scheduled it can process several events in one go. For example, in the producer-consumer problem, the producer can release the semaphore multiple times, and the consumer, when scheduled, can then process several resources at once.
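The interrupt-to-thread synchronization described above can be sketched as follows (names such as `rx_sem` and `data_isr` are illustrative; thread creation, ISR registration, and error handling are omitted):

```c
#include <rtthread.h>

static struct rt_semaphore rx_sem;   /* hypothetical semaphore name */

void app_init(void)
{
    /* initial value 0: the thread blocks until the ISR signals */
    rt_sem_init(&rx_sem, "rx", 0, RT_IPC_FLAG_FIFO);
}

void data_isr(void)                  /* interrupt service routine */
{
    /* ... read the hardware I/O port, clear the interrupt source ... */
    rt_sem_release(&rx_sem);         /* wake the processing thread */
}

void rx_thread_entry(void *param)
{
    (void)param;
    while (1)
    {
        /* pend until the ISR releases the semaphore */
        if (rt_sem_take(&rx_sem, RT_WAITING_FOREVER) == RT_EOK)
        {
            /* ... process the data the ISR announced ... */
        }
    }
}
```

The ISR only ever releases the semaphore (non-blocking); only the thread takes it, which is why this pattern is safe from interrupt context.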
Note: Resource counting is usually a mixed mode of inter-thread synchronization, because a single resource is still accessed by multiple threads, which requires processing each individual resource under a separate lock for mutually exclusive access.

4. Mutex

A mutex, also called a mutually exclusive semaphore, is a special binary semaphore. It differs from the semaphore in that it supports ownership, recursive access, and protection against priority inversion. A mutex has only two states, unlocked or locked (two state values). When a thread holds it, the mutex is locked and the thread owns it; conversely, when the thread releases it, the mutex is unlocked and the thread loses ownership. While one thread holds a mutex, no other thread can unlock or hold it, but the thread holding it can take the lock again without being suspended. This feature differs greatly from the ordinary binary semaphore: with a semaphore, a thread that takes it recursively suspends itself (eventually forming a deadlock), because no resource instance is left.

Warning: After obtaining a mutex, release it as soon as possible, and while holding the mutex you must not change the priority of the thread that holds it.

The use of the mutex is comparatively uniform, because it is a kind of semaphore and exists in the form of a lock. At initialization the mutex is always in the unlocked state, and it becomes locked as soon as a thread takes it. The mutex is better suited to two situations: a thread that holds (takes) the mutex multiple times, which avoids the deadlock that repeated recursive holding by the same thread would otherwise cause; and multi-thread synchronization where priority inversion might occur. Remember too that a mutex cannot be used in an interrupt service routine, whereas a semaphore can be used for interrupt-to-thread synchronization.
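A sketch of recursive holding, the feature that distinguishes the mutex from a binary semaphore (function names are illustrative; an RT-Thread build environment is assumed):

```c
#include <rtthread.h>

static struct rt_mutex lock;

void inner(void)
{
    /* the owner may take the mutex again without deadlocking */
    rt_mutex_take(&lock, RT_WAITING_FOREVER);
    /* ... critical section ... */
    rt_mutex_release(&lock);
}

void outer(void)
{
    rt_mutex_take(&lock, RT_WAITING_FOREVER);  /* first take: locks */
    inner();                                   /* recursive take is safe */
    rt_mutex_release(&lock);
}

void module_init(void)
{
    /* a mutex always starts in the unlocked state */
    rt_mutex_init(&lock, "lock", RT_IPC_FLAG_PRIO);
}
```

Had `lock` been a binary semaphore, the second take inside `inner()` would suspend the owner forever.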
5. Event

Events are mainly used for synchronization between threads. Unlike the semaphore, they provide one-to-many and many-to-many synchronization. That is, a single thread can wait for several events to be triggered: either any one event wakes the thread for processing, or the thread is woken for further processing only after several events have all arrived; likewise, several events can be used to synchronize multiple threads. Such a collection of events can be represented by a 32-bit unsigned integer variable, each bit of which represents one event; a thread associates itself with one or more events through a "logical AND" or "logical OR" to form an event set. The "logical OR" of events is also called independent synchronization, meaning the thread synchronizes with any one of the events; the "logical AND" is also called associative synchronization, meaning the thread synchronizes with several events together.

Events as defined by RT-Thread have the following characteristics: events relate only to threads, and events are independent of one another (each thread has 32 event flags, recorded in an unsigned integer, each bit representing one event; a number of events make up an event set); events are used only for synchronization and provide no data-transfer capability; and events are not queued, so sending the same event to a thread several times before the thread has read it is equivalent to sending it once. In the RT-Thread implementation each thread carries an event information tag with three properties: RT_EVENT_FLAG_AND (logical AND), RT_EVENT_FLAG_OR (logical OR), and RT_EVENT_FLAG_CLEAR (clear the flag). When a thread waits for event synchronization, the 32 event flags together with this event information tag determine whether the events received so far satisfy the synchronization condition.
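The AND/OR matching rule amounts to a bit test; a minimal host-side sketch (illustrative names, not RT-Thread API):

```c
#include <stdint.h>

/* Return nonzero when the pending event bits satisfy the requested set:
 * AND mode (associative sync) requires every requested bit,
 * OR mode (independent sync) requires at least one of them. */
int event_condition_met(uint32_t pending, uint32_t requested, int and_mode)
{
    uint32_t hit = pending & requested;
    return and_mode ? (hit == requested) : (hit != 0);
}
```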
Events can be used in a variety of situations and can to some extent replace semaphores for inter-thread synchronization: a thread or interrupt service routine sends an event to the event object, and the waiting thread is woken up to process it. Unlike the semaphore, however, event sends do not accumulate before the event is cleared, whereas semaphore releases do accumulate. Another feature of events is that the receiving thread can wait on several events, i.e. several events can correspond to one thread or to several threads; and depending on the parameters the thread waits with, you can choose a "logical OR" trigger or a "logical AND" trigger. This, too, is unavailable with semaphores, which recognize only a single release action and cannot wait on several kinds of release at the same time. Event types can be sent to the event object individually or together, several threads can wait on one event object, and each thread attends only to the events it is interested in; when an event of interest occurs, the thread is woken up and performs the subsequent processing.

6. Mailbox

The mailbox service is a typical communication method in real-time operating systems, characterized by low overhead and high efficiency. Each mail in a mailbox can hold only a fixed 4 bytes of content (on a 32-bit processing system a pointer is 4 bytes, so one mail exactly holds a pointer). A typical mailbox is also called an exchange of messages: a thread or interrupt service routine sends a 4-byte mail to the mailbox, and one or more threads can receive and process these mails from it. The mailbox communication mechanism adopted by the RT-Thread operating system is somewhat similar to the traditional pipe used for inter-thread communication.
The non-blocking mail send operation can safely be used from interrupt services and is an effective way for a thread, an interrupt service routine, or a timer to send a message to a thread. In general, the mail receive operation may block, depending on whether there is mail in the mailbox and on the timeout set when receiving. When no mail exists in the mailbox and the timeout is not 0, the receive operation blocks, so in that case only a thread may receive mail.

A mailbox in the RT-Thread operating system can hold a fixed number of mails; the mailbox capacity is set when the mailbox is created or initialized, and each mail is 4 bytes in size. When a larger message needs to be passed between threads, a pointer to a buffer can be sent to the mailbox as the mail. When a thread sends mail to a mailbox, if the mailbox is not full, the mail is copied into it. If the mailbox is full, the sending thread can either set a timeout and wait suspended, or return -RT_EFULL immediately. If it chooses to wait, it is woken up and continues when mail in the mailbox is collected and space becomes free. When a thread receives mail from a mailbox, if the mailbox is empty, the receiving thread can choose to wait until a new mail arrives, or to set a timeout. If the timeout expires and the mailbox still has not received mail, the waiting thread is woken up and -RT_ETIMEOUT is returned. If there is mail in the mailbox, the receiving thread copies the 4-byte mail from the mailbox to itself.
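The pointer-passing idiom for larger payloads can be sketched like this (the `report` structure and all names are illustrative; thread setup and error handling are omitted, and a static buffer is assumed to stay valid until the receiver is done with it):

```c
#include <rtthread.h>

static struct rt_mailbox mb;
static char mb_pool[16 * sizeof(rt_ubase_t)];   /* room for 16 mails */

struct report { int id; int value; };           /* hypothetical payload */
static struct report buf;

void mb_setup(void)
{
    rt_mb_init(&mb, "mb", mb_pool,
               sizeof(mb_pool) / sizeof(rt_ubase_t),  /* capacity in mails */
               RT_IPC_FLAG_FIFO);
}

void sender(void)
{
    buf.id = 1; buf.value = 42;
    /* a mail is one machine word, so pass a pointer to the larger payload */
    rt_mb_send(&mb, (rt_ubase_t)&buf);
}

void receiver(void)
{
    rt_ubase_t value;

    if (rt_mb_recv(&mb, &value, RT_WAITING_FOREVER) == RT_EOK)
    {
        struct report *r = (struct report *)value;
        rt_kprintf("report %d = %d\n", r->id, r->value);
    }
}
```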
A mailbox is thus a simple inter-thread messaging method which, in the RT-Thread implementation, delivers 4 bytes of mail at a time, with some buffering capacity for a certain number of mails (the number determined by the capacity specified when the mailbox was created or initialized). Since the maximum length of a mail is 4 bytes, the mailbox serves for messages of up to 4 bytes and can no longer be used once messages exceed that size. Most importantly, 4 bytes on a 32-bit system exactly fits a pointer, so the mailbox is also well suited to cases where only a pointer is passed.

7. Message queue

The message queue is another common method of inter-thread communication. It can receive messages of variable length from threads or interrupt service routines and caches the messages in its own memory space. Other threads can read the corresponding messages from the message queue; when the queue is empty, the reading thread can be suspended, and when a new message arrives the suspended thread is woken up to receive and process it. The message queue is an asynchronous communication method. Through the message queue service, a thread or interrupt service routine can put one or more messages into the queue, and likewise one or more threads can take messages out of it. When several messages are sent to a message queue, the message that entered the queue first is delivered to a thread first; that is, threads get the earliest-queued message first, following the first-in first-out (FIFO) principle. The message queue object of the RT-Thread operating system consists of several parts, and on creation a message queue control block is assigned: the message queue name, the memory buffer, the message size, and the queue length.
At the same time, each message queue object contains several message boxes, each of which can hold one message. The first and last message boxes in the queue are called the message list head and the message list tail, corresponding to msg_queue_head and msg_queue_tail in the message queue control block. Some message boxes may be empty; these form a free message-box list through msg_queue_free. The total number of message boxes in the queue is the length of the message queue, which can be specified when the queue is created.

The message queue suits cases where longer messages are sent, including message exchange between threads and messages sent to threads from interrupt service routines (an interrupt service routine cannot plausibly receive messages, since receiving may block). The obvious differences between the message queue and the mailbox are that the message length is not limited to 4 bytes and that the message queue also includes a function interface for sending urgent messages. However, when you create a message queue whose maximum message size is 4 bytes, the message queue object degenerates into a mailbox.

General system design often runs into the problem of sending synchronous messages, which can be handled according to the situation: two threads can use the form [message queue + semaphore or mailbox]. The sending thread sends the message to the message queue and then waits to receive confirmation from the receiving thread. With a mailbox as the confirmation flag, the receiving thread can pass some status value back to the sending thread; with a semaphore as the confirmation flag, the receiving thread can only give the sending thread a single notification that the message has been received.
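The [message queue + semaphore] synchronous-send pattern can be sketched as follows (the `msg` layout and all names are illustrative; thread creation and error handling are omitted, and the classic `rt_mq_recv` returning `RT_EOK` on success is assumed):

```c
#include <rtthread.h>

static struct rt_messagequeue mq;
static char mq_pool[256];
static struct rt_semaphore ack;        /* confirmation flag */

struct msg { int cmd; int arg; };      /* hypothetical message layout */

void ipc_setup(void)
{
    rt_mq_init(&mq, "mq", mq_pool,
               sizeof(struct msg),     /* max size of one message */
               sizeof(mq_pool),        /* total buffer size */
               RT_IPC_FLAG_FIFO);
    rt_sem_init(&ack, "ack", 0, RT_IPC_FLAG_FIFO);
}

void sender_thread(void)
{
    struct msg m = { 1, 100 };

    rt_mq_send(&mq, &m, sizeof(m));            /* post the message */
    rt_sem_take(&ack, RT_WAITING_FOREVER);     /* block until confirmed */
}

void receiver_thread(void)
{
    struct msg m;

    if (rt_mq_recv(&mq, &m, sizeof(m), RT_WAITING_FOREVER) == RT_EOK)
    {
        /* ... handle m.cmd / m.arg ... */
        rt_sem_release(&ack);                  /* confirm receipt */
    }
}
```

Replacing `ack` with a mailbox would let the receiver hand a status word back instead of a bare acknowledgement.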
II. IPC control block

In include/rtdef.h:

```c
/**
 * IPC flags and control command definitions
 */
#define RT_IPC_FLAG_FIFO    0x00    /**< FIFOed IPC. @ref IPC. */
#define RT_IPC_FLAG_PRIO    0x01    /**< PRIOed IPC. @ref IPC. */

#define RT_IPC_CMD_UNKNOWN  0x00    /**< unknown IPC command */
#define RT_IPC_CMD_RESET    0x01    /**< reset IPC object */

#define RT_WAITING_FOREVER  -1      /**< Block forever until resource is obtained. */
#define RT_WAITING_NO       0       /**< Non-block. */

/**
 * Base structure of IPC object
 */
struct rt_ipc_object
{
    struct rt_object parent;            /**< inherit from rt_object (derived from the kernel object) */
    rt_list_t        suspend_thread;    /**< list of threads pended on this resource */
};
```

III. IPC inline functions

In src/ipc.c:

```c
rt_inline rt_err_t rt_ipc_object_init(struct rt_ipc_object *ipc)
{
    /* initialize the list of threads suspended on this IPC object */
    rt_list_init(&(ipc->suspend_thread));

    return RT_EOK;
}
```

```c
/**
 * This function will suspend a thread to a specified list. IPC object or some
 * double-queue object (mailbox etc.) contains this kind of list.
 *
 * @param list the IPC suspended thread list
 * @param thread the thread object to be suspended
 * @param flag the IPC object flag, which shall be
 *             RT_IPC_FLAG_FIFO / RT_IPC_FLAG_PRIO
 *
 * @return the operation status, RT_EOK on successful
 */
rt_inline rt_err_t rt_ipc_list_suspend(rt_list_t        *list,
                                       struct rt_thread *thread,
                                       rt_uint8_t        flag)
{
    /* suspend thread */
    rt_thread_suspend(thread);

    switch (flag)
    {
    case RT_IPC_FLAG_FIFO:
        /* FIFO mode: append directly to the tail of the queue */
        rt_list_insert_before(list, &(thread->tlist));
        break;

    case RT_IPC_FLAG_PRIO:
        {
            struct rt_list_node *n;
            struct rt_thread *sthread;

            /* find a suitable position by walking the suspend list */
            for (n = list->next; n != list; n = n->next)
            {
                sthread = rt_list_entry(n, struct rt_thread, tlist);

                /* found a thread with lower priority: insert before it */
                if (thread->current_priority < sthread->current_priority)
                {
                    /* insert this thread before the sthread */
                    rt_list_insert_before(&(sthread->tlist), &(thread->tlist));
                    break;
                }
            }

            /*
             * not found a suitable position,
             * append to the end of suspend_thread list
             */
            if (n == list)
                rt_list_insert_before(list, &(thread->tlist));
        }
        break;
    }

    return RT_EOK;
}
```

Calling rt_ipc_list_suspend suspends the current thread, that is, adds it to the IPC object's (e.g. semaphore's) suspend list. The flag parameter here is sem->parent.parent.flag (set when the semaphore is initialized) and takes one of two values, RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO: the former appends to the suspend list in FIFO order, while the latter decides the position in the list by the thread's own priority. Because each release of the semaphore wakes only the first thread on the suspend list, a thread's position in the list determines the order in which suspended threads are woken when the signal arrives.

```c
/**
 * This function will resume the first thread in the list of an IPC object:
 * - remove the thread from suspend queue of IPC object
 * - put the thread into system ready queue
 *
 * @param list the thread list
 *
 * @return the operation status, RT_EOK on successful
 */
rt_inline rt_err_t rt_ipc_list_resume(rt_list_t *list)
{
    struct rt_thread *thread;

    /* get thread entry: the first suspended thread */
    thread = rt_list_entry(list->next, struct rt_thread, tlist);

    RT_DEBUG_LOG(RT_DEBUG_IPC, ("resume thread:%s\n", thread->name));

    /* resume it */
    rt_thread_resume(thread);

    return RT_EOK;
}
```

The function rt_ipc_list_resume wakes only the first thread suspended on the IPC object.
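The RT_IPC_FLAG_PRIO insertion rule above can be mirrored in a small host-side function for illustration (an array of waiter priorities stands in for the suspend list; names are illustrative, not kernel code):

```c
#include <stddef.h>

/* Walk the suspend list (head first) and stop at the first waiter with a
 * numerically larger, i.e. lower, priority.  Because the kernel loop uses
 * a strict '<', equal priorities keep FIFO order among themselves, and
 * with no hit the new thread goes to the tail. */
size_t suspend_insert_pos(const unsigned *prios, size_t n, unsigned new_prio)
{
    for (size_t i = 0; i < n; i++)
        if (new_prio < prios[i])
            return i;
    return n;   /* append at the tail, as the kernel loop does */
}
```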
A normal wakeup of a suspended thread (such as one that obtained the semaphore, mutex, etc.) does not modify the thread's error value; the original value RT_EOK remains unchanged.

```c
/**
 * This function will resume all suspended threads in a list, including
 * suspend list of IPC object and private list of mailbox etc.
 *
 * @param list of the threads to resume
 *
 * @return the operation status, RT_EOK on successful
 */
rt_inline rt_err_t rt_ipc_list_resume_all(rt_list_t *list)
{
    struct rt_thread *thread;
    register rt_ubase_t temp;

    /* wakeup all suspended threads */
    while (!rt_list_isempty(list))   /* walk the thread suspend list */
    {
        /* disable interrupt */
        temp = rt_hw_interrupt_disable();

        /* get next suspended thread */
        thread = rt_list_entry(list->next, struct rt_thread, tlist);
        /* set error code to -RT_ERROR: this thread is woken abnormally,
           not because it obtained the IPC object */
        thread->error = -RT_ERROR;

        /*
         * resume thread;
         * in rt_thread_resume function, it will remove current thread from
         * suspend list
         */
        rt_thread_resume(thread);

        /* enable interrupt */
        rt_hw_interrupt_enable(temp);
    }

    return RT_EOK;
}
```

rt_ipc_list_resume_all wakes all the threads on the suspend list. Note that the error value of each woken thread is set to -RT_ERROR, to mark that the thread was woken abnormally rather than because it obtained the IPC kernel object (semaphore, mutex, etc.); the take functions then check the thread's error value to tell the two cases apart.
