IPC (Inter-Processor Communication) provides several levels of support for inter-core communication under SYS/BIOS. (In the figures below, blue modules are those whose APIs must be called from code, red modules only need to be configured in the .cfg file, and gray modules are optional.)

(1) Minimal use: this scenario is implemented through the inter-core notification mechanism. A notification carries very little information (typically 32 bits), hence the name "minimal use". It is normally used for simple synchronization between cores and cannot handle complex message passing. Here we use the APIs of the Notify module: for example, Notify_sendEvent() sends an event to a particular core, and a callback function can be registered dynamically for a particular event. Because a notification carries so little information, only an event number (plus a small payload) is sent to the remote processor; the callback registered for that event number determines the subsequent action, and the extra data is delivered to the callback as a function argument.

(2) Add data passing: this scenario builds on minimal use by adding a shared linked list as a data path between the cores. The list is provided by the ListMP module, a doubly linked list placed in shared memory, so the SharedRegion module is also needed. In addition, ListMP uses the NameServer module to manage name/value pairs, and the GateMP module ("gates") to keep list elements from being accessed by several processors concurrently.

(3) Add dynamic allocation: this scenario adds the ability to allocate list elements dynamically from a heap. On top of the previous scenario it adds a Heap*MP module (HeapBufMP, HeapMemMP or HeapMultiBufMP), which is mainly used to allocate shared-heap memory for the linked list.

(4) Powerful but easy-to-use messaging: this scenario uses the MessageQ module to pass messages. Compared with the Notify mechanism, MessageQ supports much more complex inter-core communication. In this case only the MultiProc and SharedRegion modules need to be configured, and Ipc_start() automatically sets up the gray modules for us.

In this section we take only the simple minimal-use case as an example and analyze the multi-core communication example that ships with CCS, which passes information among the eight cores. We then summarize the steps of this method and use them to implement message passing between a master core and slave cores.
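As a sketch of the minimal-use pattern, the fragment below registers a callback and sends an event with the TI IPC Notify API. The names EVENT_ID, INTERRUPT_LINE and the payload value are illustrative assumptions, not taken from the CCS example:

```c
/* Minimal-use Notify sketch (assumes the TI IPC headers and a .cfg
 * that uses ti.sdo.ipc.Notify). EVENT_ID and the payload are
 * illustrative values, not from the original example. */
#include <xdc/std.h>
#include <xdc/runtime/System.h>
#include <ti/ipc/Notify.h>
#include <ti/ipc/MultiProc.h>

#define INTERRUPT_LINE  0
#define EVENT_ID        10

/* Callback: runs on the receiving core when the event arrives.
 * 'payload' carries the 32-bit value passed to Notify_sendEvent(). */
Void myNotifyCallback(UInt16 procId, UInt16 lineId,
                      UInt32 eventId, UArg arg, UInt32 payload)
{
    /* keep this short: it runs in an interrupt-like context */
}

Void setupNotify(UInt16 remoteProcId)
{
    Int status;

    /* register the callback for EVENT_ID on the given interrupt line */
    status = Notify_registerEvent(remoteProcId, INTERRUPT_LINE,
                                  EVENT_ID, myNotifyCallback, NULL);
    if (status < 0) {
        System_abort("Notify_registerEvent failed\n");
    }

    /* send EVENT_ID plus a 32-bit payload to the remote core;
     * TRUE = wait until the previous event has been read remotely */
    status = Notify_sendEvent(remoteProcId, INTERRUPT_LINE,
                              EVENT_ID, 0x12345678, TRUE);
    if (status < 0) {
        System_abort("Notify_sendEvent failed\n");
    }
}
```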
I. Opening the CCS bundled example

Opening one of the CCS bundled examples works the same way as creating a new CCS project, except that in "Project templates and examples" we pick the example we need: select the C6678 example under "IPC and I/O Examples", enter a project name, and CCS copies the example into the new project.
II. Building the project and running it

(1) Click Build and check that there are no errors.
(2) Import the target configuration (.ccxml); here we again choose the C6678 Device Functional Simulator, Little Endian.
(3) Click Debug.
(4) Select all the cores (click while holding Shift), then right-click and choose Group core(s).
(5) Select the group and click Run.
III. Analysis of the results

Besides calling the Notify module to pass events between the cores and trigger actions through callback functions, this example uses the Semaphore module to make the cores execute in order and to prevent preemption: each core has a semaphore that indicates whether it is currently executing or waiting for the other cores.

(1) Each core prints its startup message.
This output is produced in main(); each core executes its own main().

Each core registers the event and specifies its callback function.
(2) Core 0 starts things off by releasing a semaphore. Before core 0 releases the semaphore semHandle, all the other cores are blocked waiting for their semaphores to be released.

Core 0 sends an event to core 1, which triggers core 1's callback; the callback posts semHandle. Note that the semaphore being posted here belongs to core 1.

After activating core 1's semaphore, core 0 prints its result and then pends on its own semaphore. Every core's semaphore has an initial count of 0.
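The handshake described above (the callback posts the local semaphore; the task pends on it) can be sketched as follows. The names semHandle, EVENT_ID and INTERRUPT_LINE are assumptions based on the description, not the exact example source:

```c
/* Sketch of the callback/semaphore handshake described above.
 * Assumes a SYS/BIOS Semaphore 'semHandle' created with count 0. */
#include <xdc/std.h>
#include <ti/sysbios/BIOS.h>
#include <ti/sysbios/knl/Semaphore.h>
#include <ti/ipc/Notify.h>

#define INTERRUPT_LINE 0
#define EVENT_ID       10

extern Semaphore_Handle semHandle;  /* this core's semaphore, count 0 */

/* Runs when another core sends us EVENT_ID: wake the local task. */
Void cbFxn(UInt16 procId, UInt16 lineId, UInt32 eventId,
           UArg arg, UInt32 payload)
{
    Semaphore_post(semHandle);
}

Void waitAndForward(UInt16 nextProcId)
{
    /* block until the previous core has notified us */
    Semaphore_pend(semHandle, BIOS_WAIT_FOREVER);

    /* now it is our turn: pass the token on to the next core */
    Notify_sendEvent(nextProcId, INTERRUPT_LINE, EVENT_ID, 0, TRUE);
}
```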
(3) Next, the eight cores take turns executing, NUMLOOPS times each (NUMLOOPS is set to 10 here).

The next core's semaphore is activated and that core begins executing.

The core after the current one is activated through the callback function.

The current core finishes sending its event.

(4) Each core exits its task loop and then shuts down its BIOS instance.
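Putting steps (2)-(4) together, each core's task loop looks roughly like the sketch below. NUMLOOPS, the round-robin choice of nextProcId and the event number are assumptions based on the description above:

```c
/* Sketch of each core's task loop in the eight-core example.
 * NUMLOOPS, nextProcId and EVENT_ID are assumptions. */
#include <xdc/std.h>
#include <xdc/runtime/System.h>
#include <ti/sysbios/BIOS.h>
#include <ti/sysbios/knl/Semaphore.h>
#include <ti/ipc/Notify.h>
#include <ti/ipc/MultiProc.h>

#define NUMLOOPS       10
#define INTERRUPT_LINE 0
#define EVENT_ID       10

extern Semaphore_Handle semHandle; /* posted by this core's callback */

Void tsk0_func(UArg arg0, UArg arg1)
{
    /* pass the token round-robin: core n notifies core (n+1) % N */
    UInt16 nextProcId = (MultiProc_self() + 1)
                        % MultiProc_getNumProcessors();
    Int i;

    for (i = 0; i < NUMLOOPS; i++) {
        /* wait until the previous core has notified us */
        Semaphore_pend(semHandle, BIOS_WAIT_FOREVER);

        System_printf("core %d: loop %d\n", MultiProc_self(), i);

        /* hand the token to the next core */
        Notify_sendEvent(nextProcId, INTERRUPT_LINE, EVENT_ID, 0, TRUE);
    }

    /* done: leave the task loop and shut down BIOS on this core */
    BIOS_exit(0);
}
```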
IV. Configuring multi-core IPC

(1) Starting IPC is very simple: after including the IPC header file, calling Ipc_start() in the main() function configures and starts the modules that IPC needs according to the .cfg file. For example, Ipc_start() calls Notify_start() by default. For these modules to be started, however, they must have been added to the .cfg file in advance (e.g. right-click the module and select "Use").

(2) IPC is configured in the .cfg file. The Notify module and the other modules you need must be declared there first; if you are not sure which modules IPC requires, it is best to use the IPC module itself:

var Ipc = xdc.useModule('ti.sdo.ipc.Ipc');

(3) Set how the cores are synchronized:

Ipc.procSync = Ipc.ProcSync_ALL;

Here Ipc.ProcSync_ALL means that Ipc_start() automatically synchronizes (attaches to) all the cores. Ipc.ProcSync_PAIR means that only some cores are synchronized, and each core that needs to be attached must be attached explicitly with Ipc_attach(); this is the default option. Ipc.ProcSync_NONE means that Ipc_start() does not synchronize any core.

(4) Attaching and detaching cores: Ipc_attach() and Ipc_detach(). Using these two functions requires the .cfg file to be configured with Ipc.ProcSync_PAIR. Ipc_attach() is very simple to use: after Ipc_start(), call Ipc_attach(coreID), where coreID is the ID of the core to attach to; for example, Ipc_attach(0) attaches to core 0. However, note the following:

a) Cores must be attached in order of increasing ID. For example, the current core must attach to core 0 before it can attach to core 1.

b) When two cores attach to each other, the core with the smaller ID must be attached to first: only after core 0 has attached to core 1 can core 1 attach to core 0.

c) Because an attach does not necessarily succeed on the first try, it is usually wrapped in a wait loop, typically like this:

while (Ipc_attach(coreID) < 0) {
    Task_sleep(1);
}

Ipc_detach() is used in the same way as Ipc_attach(), but its job is to tear the connection down.
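Taken together, points (1)-(4) lead to an initialization sequence like the sketch below. The core IDs are assumptions, and the matching .cfg lines are shown as comments. Note that Task_sleep() needs task context, so the retry loop belongs in a task rather than in main():

```c
/* Sketch of IPC startup under ProcSync_PAIR, per points (1)-(4).
 *
 * Matching .cfg fragment (assumed):
 *   var Ipc = xdc.useModule('ti.sdo.ipc.Ipc');
 *   Ipc.procSync = Ipc.ProcSync_PAIR;
 */
#include <xdc/std.h>
#include <xdc/runtime/System.h>
#include <ti/sysbios/BIOS.h>
#include <ti/sysbios/knl/Task.h>
#include <ti/ipc/Ipc.h>

/* Attach to one remote core, retrying until the remote is ready.
 * Must run in a task: Task_sleep() cannot be called from main(). */
static Void attachTo(UInt16 remoteProcId)
{
    while (Ipc_attach(remoteProcId) < 0) {
        Task_sleep(1);
    }
}

Void tsk0_func(UArg arg0, UArg arg1)
{
    /* attach in increasing ID order, per note a) above
     * (skip this core's own ID) */
    attachTo(0);
    attachTo(1);
    /* ... inter-core communication can start here ... */
}

Int main(Int argc, Char *argv[])
{
    if (Ipc_start() != Ipc_S_SUCCESS) {
        System_abort("Ipc_start failed\n");
    }
    BIOS_start();   /* does not return; tasks run from here */
    return (0);
}
```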
V. Communication between a master core and slave cores

In the IPC example described above, every core is connected to every other core, and all the connections are alike and bidirectional. In many cases we do not need that many cores, or many of the connections are unnecessary, and using Ipc.ProcSync_ALL is then inefficient. The example introduced below is master-slave communication: we pick three cores, one as the master and the other two as slaves. The master (core 0) is attached to each slave, while the slaves (core 1 and core 2) are not attached to each other. This master-slave scheme carries out the following sequence:

a) The master sends an event to each of the two slaves, activating them so that they perform their tasks.

b) Once the two slaves have finished their tasks, each sends an event back to the master, which then continues with its own task.

The main contents of this routine are:

(1) In the .cfg file, change the procSync setting to Ipc.procSync = Ipc.ProcSync_PAIR;
(2) Define the master and slave core IDs.

(3) Add the following code after the Ipc_start() call in the main() function:
a) Attach according to each core's role: the master attaches to both slaves, and each slave attaches to the master.

b) After the inter-core attaches, register the event for each connection.

(4) In the task function tsk0_func, add the send and receive code according to whether the core is the master or a slave; the master core is shown below as an example.
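The role-based logic described above can be sketched as follows. The core IDs, event number and semaphore name are assumptions; the full routine differs in detail:

```c
/* Sketch of role-based attach/notify logic for one master (core 0)
 * and two slaves (cores 1 and 2). Names are assumptions. */
#include <xdc/std.h>
#include <ti/sysbios/BIOS.h>
#include <ti/sysbios/knl/Task.h>
#include <ti/sysbios/knl/Semaphore.h>
#include <ti/ipc/Ipc.h>
#include <ti/ipc/Notify.h>
#include <ti/ipc/MultiProc.h>

#define MASTER_ID      0
#define SLAVE1_ID      1
#define SLAVE2_ID      2
#define INTERRUPT_LINE 0
#define EVENT_ID       10

extern Semaphore_Handle semHandle;  /* this core's semaphore, count 0 */

static Void attachTo(UInt16 procId)
{
    while (Ipc_attach(procId) < 0) {
        Task_sleep(1);
    }
}

/* posted by the remote core's Notify_sendEvent() */
Void cbFxn(UInt16 procId, UInt16 lineId, UInt32 eventId,
           UArg arg, UInt32 payload)
{
    Semaphore_post(semHandle);
}

Void tsk0_func(UArg arg0, UArg arg1)
{
    if (MultiProc_self() == MASTER_ID) {
        /* master attaches to both slaves, in increasing ID order */
        attachTo(SLAVE1_ID);
        attachTo(SLAVE2_ID);
        Notify_registerEvent(SLAVE1_ID, INTERRUPT_LINE, EVENT_ID, cbFxn, NULL);
        Notify_registerEvent(SLAVE2_ID, INTERRUPT_LINE, EVENT_ID, cbFxn, NULL);

        /* kick off both slaves, then wait for both replies */
        Notify_sendEvent(SLAVE1_ID, INTERRUPT_LINE, EVENT_ID, 0, TRUE);
        Notify_sendEvent(SLAVE2_ID, INTERRUPT_LINE, EVENT_ID, 0, TRUE);
        Semaphore_pend(semHandle, BIOS_WAIT_FOREVER);
        Semaphore_pend(semHandle, BIOS_WAIT_FOREVER);
        /* ... master continues with its own task ... */
    } else {
        /* each slave attaches only to the master */
        attachTo(MASTER_ID);
        Notify_registerEvent(MASTER_ID, INTERRUPT_LINE, EVENT_ID, cbFxn, NULL);

        /* wait for the master's event, do the work, then reply */
        Semaphore_pend(semHandle, BIOS_WAIT_FOREVER);
        /* ... slave task ... */
        Notify_sendEvent(MASTER_ID, INTERRUPT_LINE, EVENT_ID, 0, TRUE);
    }
}
```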
After the master has sent the events to the slaves, it blocks on its own semaphore with Semaphore_pend() and only continues its task once the slaves reply. At the same time, a slave can begin its task only after the master's event has posted that slave's semaphore through the registered callback.

(5) Results of the simulation debugging
As the output shows, each slave starts its task as soon as it receives the master's event, and only when both slave tasks are fully finished does the master resume its own task.
Tips:

a) Note that System_printf() must not be called inside the registered callback function; doing so causes the following error:

ti.sysbios.gates.GateMutex: line 97: assertion failure: A_badContext: bad calling context. See GateMutex API doc for details.
xdc.runtime.Error.raise: terminating execution

b) The code above is not complete; the full routine code is at: https://github.com/tostq/EasyMulticoreDSP/tree/master/6.IPC_notify
(Multi-core DSP Quick Start) 6. IPC usage and example analysis