Event-driven Framework (V): Implementation of the Framework

Source: Internet
Author: User
Tags: garbage collection, message queue

This article describes some of the QP framework's strategies and source code.
For various reasons this series pauses here for the time being. What follows deals mainly with the kernel implementation.

1. Critical Section

Only one thread (or process) at a time is allowed inside a critical section; no other thread (or process) may enter while it executes. The code in a critical section must therefore run indivisibly.
In embedded systems, a critical section is usually protected by locking (disabling) interrupts on entry and unlocking them on exit. On systems where interrupts cannot be locked directly, other mechanisms provided by the underlying operating system can be used.
To keep this uniform, the framework defines macros in a header file; porting then only requires adapting the macros for the chosen critical-section policy in that .h file according to the characteristics of the target chip.

Critical section types

1> Saving and restoring the interrupt status. The most common critical-section implementation saves the interrupt status before entering the critical section and restores that status after exiting.
Usage of this kind of critical section looks like this:

{
    unsigned int lock_key;          /* variable that saves the interrupt status */
    lock_key = get_int_status();    /* read the interrupt status from the CPU */
    int_lock();                     /* lock (disable) interrupts */
    /* critical section of code */
    set_int_status(lock_key);       /* restore the saved status (unlock) */
}

#define QF_INT_KEY_TYPE     unsigned int
#define QF_INT_LOCK(key_)   do { \
    (key_) = get_int_status(); \
    int_lock(); \
} while (0)
#define QF_INT_UNLOCK(key_) set_int_status(key_)

The main advantage of this policy is that critical sections can nest.
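For example, with the save-and-restore policy a function that already holds the lock can safely call another function that also takes it, because every nesting level keeps its own saved status. A minimal sketch, assuming the macros above (get_int_status(), int_lock(), and set_int_status() stand for the chip-specific primitives):

void inner(void) {
    QF_INT_KEY_TYPE key;
    QF_INT_LOCK(key);      /* saves the "locked" status; interrupts already disabled */
    /* ... inner critical section ... */
    QF_INT_UNLOCK(key);    /* restores "locked"; interrupts stay disabled */
}

void outer(void) {
    QF_INT_KEY_TYPE key;
    QF_INT_LOCK(key);      /* saves the "unlocked" status, then disables interrupts */
    inner();               /* nested critical section is safe */
    QF_INT_UNLOCK(key);    /* only here are interrupts actually re-enabled */
}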

2> Unconditional locking and unlocking. A simpler and faster critical-section policy is to always unlock interrupts unconditionally on exit:

#define QF_INT_LOCK(key_)   int_lock()
#define QF_INT_UNLOCK(key_) int_unlock()

The "Unconditional Lock and unlock" strategy is simple and fast, but does not allow nesting of critical sections because interrupts are always unlocked when exiting from a critical section, regardless of whether the interrupt has been locked when the interrupt is entered. The inability to nest critical sections does not mean that you cannot nest interrupts. Many processors have priority-based interrupt controllers, such as the Interl 8259A Programmable interrupt Controller in the PC, or the nested vector interrupt Controller Nvic integrated in the arm cortex-m3. Such interrupt controllers handle interrupt prioritization and nesting when interrupts reach the processor core. Therefore, you can safely unlock interrupts at the processor layer so that you can avoid nesting critical sections within the ISR. 2. Active Objects

Active object = thread of control + event queue + state machine
The active object inherits the state machine (it could also be a protothread-style coroutine) that performs the event processing, and encapsulates an event queue and a thread of control into a single object, which serves as the basic unit managed by the framework. The active object also exposes interfaces for initialization, construction, starting the object, dispatching, posting events, stopping the thread, and event deferral and recall.

0> The start() function

The QActive_start() function creates a thread for the active object and lets the framework know it should start managing the active object. In most cases, all active objects are started exactly once, when the system initializes.

/* QActive_start() pseudo-code */
void QActive_start(QActive *me,
     uint8_t prio,                            /* priority */
     QEvent const *qSto[], uint32_t qLen,     /* event queue storage */
     void *stkSto, uint32_t stkSize,          /* task stack storage */
     QEvent const *ie)                        /* initialization event */
{
    me->prio = prio;              /* set the priority */
    QF_add_(me);                  /* register the active object with the framework */
    QF_ACTIVE_INIT_(me, ie);      /* execute the active object's initialization */

    /* ... create the event queue and the thread ... */
}
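A hypothetical usage sketch; the Blinky type and the queue, stack, and priority values below are made up for illustration:

static Blinky l_blinky;                    /* hypothetical active object */
static QEvent const *l_blinkyQSto[10];     /* storage for its event queue */
static uint32_t l_blinkyStk[64];           /* storage for its task stack */

QActive_start((QActive *)&l_blinky,
              1,                                    /* priority */
              l_blinkyQSto, Q_DIM(l_blinkyQSto),    /* event queue */
              l_blinkyStk, sizeof(l_blinkyStk),     /* task stack */
              (QEvent *)0);                         /* no initialization event */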
3. Event Management

1> Event queue

The event queue acts as a buffer for events; it absorbs bursts when too many events are posted in a short period, or when events cannot be processed immediately. The event queue of an active object is typically written from multiple contexts but read by a single reader, so it requires an appropriate mutual-exclusion mechanism to handle concurrent access.
An active object's queue is generally managed as a ring buffer of events. Following the zero-copy policy from the previous chapter, the buffer holds pointers to events; a pointer may refer to a dynamic event in an event pool or to a static event.
The event queue can be implemented on top of an RTOS message queue, or as a native event queue.
The definition of the event structure:

typedef struct QEventTag {
    QSignal sig;         /* signal of the event */
    uint8_t dynamic_;    /* dynamic attributes of the event */
} QEvent;

A dynamic_ value of 0 is reserved for static events; any other value encodes the event pool number and the reference counter.
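Judging from the masks and shifts used in QF_new_() and QF_gc() below, the layout of dynamic_ appears to be: the upper two bits hold the pool ID plus one, and the lower six bits hold the reference counter. A decoding sketch:

/* dynamic_ == 0   : static event, never recycled    */
/* dynamic_ >> 6   : event pool ID + 1 (1..3)        */
/* dynamic_ & 0x3F : reference counter (0..63)       */
uint8_t poolId = (uint8_t)((e->dynamic_ >> 6) - 1);
uint8_t refCtr = (uint8_t)(e->dynamic_ & 0x3F);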

(The allocation of event pools is not yet entirely clear to me.)

QF_poolInit() initializes an event pool for dynamic events. An application may call this function up to 3 times, to initialize up to 3 event pools. To make fast event allocation possible, the event pools must be initialized in ascending order of block size.

QF_EPOOL_TYPE_ QF_pool_[3];    /* up to 3 event pools */
uint8_t QF_maxPool_;           /* number of initialized event pools */
/* ... */
void QF_poolInit(void *poolSto, uint32_t poolSize, QEventSize evtSize) {
    /* cannot exceed the number of supported pools */
    Q_REQUIRE(QF_maxPool_ < (uint8_t)Q_DIM(QF_pool_));
    /* the pools must be initialized in ascending order of event size */
    Q_REQUIRE((QF_maxPool_ == (uint8_t)0)
              || (QF_EPOOL_EVENT_SIZE_(QF_pool_[QF_maxPool_ - 1]) < evtSize));
    /* perform the initialization of the pool */
    QF_EPOOL_INIT_(QF_pool_[QF_maxPool_], poolSto, poolSize, evtSize);
    ++QF_maxPool_;             /* one more initialized pool */
}
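A hypothetical initialization sketch with three pools registered in ascending order of block size; MediumEvt and LargeEvt are made-up event types:

static QEvent    l_smlPoolSto[16];    /* storage for small events */
static MediumEvt l_medPoolSto[8];     /* storage for medium events */
static LargeEvt  l_lrgPoolSto[4];     /* storage for large events */

QF_poolInit(l_smlPoolSto, sizeof(l_smlPoolSto), sizeof(QEvent));     /* smallest first */
QF_poolInit(l_medPoolSto, sizeof(l_medPoolSto), sizeof(MediumEvt));
QF_poolInit(l_lrgPoolSto, sizeof(l_lrgPoolSto), sizeof(LargeEvt));   /* largest last */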

The policy for allocating events from the event pools:

QEvent *QF_new_(QEventSize evtSize, QSignal sig) {
    QEvent *e;
    /* find the smallest pool that can accommodate the requested event size */
    uint8_t idx = (uint8_t)0;
    while (evtSize > QF_EPOOL_EVENT_SIZE_(QF_pool_[idx])) {
        ++idx;
        Q_ASSERT(idx < QF_maxPool_);   /* cannot exceed the initialized pools */
    }
    QF_EPOOL_GET_(QF_pool_[idx], e);   /* get the event from the found pool */
    Q_ASSERT(e != (QEvent *)0);        /* if 0, the event pool is exhausted */
    e->sig = sig;                      /* set the signal of the event */
    /* store the dynamic attributes of the event:
     * the pool ID and the reference counter == 0 */
    e->dynamic_ = (uint8_t)((idx + 1) << 6);
    return e;
}
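Applications normally do not call QF_new_() directly; in QP it is wrapped by an allocation macro along the lines of the sketch below (KeyEvt and KEY_PRESS_SIG are hypothetical):

#define Q_NEW(evtT_, sig_) \
    ((evtT_ *)QF_new_((QEventSize)sizeof(evtT_), (QSignal)(sig_)))

KeyEvt *ke = Q_NEW(KeyEvt, KEY_PRESS_SIG);   /* allocate from the best-fit pool */
ke->keyCode = 42;                            /* fill in the payload */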

Garbage collection. QF_gc() recycles a dynamic event once its last reference is consumed; for example, an event posted to a single queue carries a reference counter of 1, and when the recipient finishes processing it, QF_gc() finds the counter at 1 and returns the event to its pool:

void QF_gc(QEvent const *e) {
    if (e->dynamic_ != (uint8_t)0) {   /* is it a dynamic event? */
        QF_INT_LOCK_KEY_
        QF_INT_LOCK_();                /* lock interrupts */
        if ((e->dynamic_ & 0x3F) > 1) {      /* is the reference counter > 1? */
            --((QEvent *)e)->dynamic_;       /* decrement the counter */
            QF_INT_UNLOCK_();                /* unlock interrupts */
        }
        else {   /* this was the last reference, recycle the event */
            uint8_t idx = (uint8_t)((e->dynamic_ >> 6) - 1);   /* get the pool ID */
            QF_INT_UNLOCK_();          /* unlock interrupts */
            Q_ASSERT(idx < QF_maxPool_);
            QF_EPOOL_PUT_(QF_pool_[idx], (QEvent *)e);   /* return the event to its pool */
        }
    }
}

Event deferral and recall

Deferring an event is convenient when the event arrives at a particularly inconvenient time; it can be put aside until the system is in a better state to handle it. (Does every active object also natively get a queue for event deferral and recall?) The implementation of deferral and recall follows:

/* event deferral */
void QActive_defer(QActive *me, QEQueue *eq, QEvent const *e) {
    (void)me;
    QEQueue_postFIFO(eq, e);   /* posting to the given "raw" queue increments the
                                * reference counter of a dynamic event, so the event
                                * will not be recycled at the end of the current
                                * RTC step */
}

/* event recall */
QEvent const *QActive_recall(QActive *me, QEQueue *eq) {
    QEvent const *e = QEQueue_get(eq);   /* get an event from the deferral queue */
    if (e != (QEvent *)0) {              /* the event is valid */
        QF_INT_LOCK_KEY_
        QActive_postLIFO(me, e);   /* post the event to the active object's queue
                                    * using the LIFO policy */
        QF_INT_LOCK_();            /* lock interrupts */
        if (e->dynamic_ != (uint8_t)0) {   /* is it a dynamic event? */
            /* at this moment the reference counter must be at least 2,
             * because the event is referenced by at least 2 event queues */
            Q_ASSERT((e->dynamic_ & 0x3F) > 1);
            --((QEvent *)e)->dynamic_;     /* decrement the counter */
        }
        QF_INT_UNLOCK_();
    }
    return e;
}
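A hypothetical usage sketch (the me->requestQueue member, a raw QEQueue, is made up): while the active object is busy it defers the current event e, and when it becomes idle again it recalls one deferred event.

/* while busy: put the incoming request aside into a private raw queue */
QActive_defer((QActive *)me, &me->requestQueue, e);

/* when idle again: recall one deferred request, if any, back into the
 * active object's own queue (posted LIFO, so it is processed next) */
if (QActive_recall((QActive *)me, &me->requestQueue) != (QEvent *)0) {
    /* a deferred request was recalled and will be handled next */
}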
2> Event distribution

Direct event posting
The QActive_postFIFO() and QActive_postLIFO() functions support posting events directly to an active object.
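For example, a producer that knows its consumer can post to it directly (the l_consumer object, DataEvt, and DATA_READY_SIG are hypothetical):

DataEvt *de = Q_NEW(DataEvt, DATA_READY_SIG);             /* allocate a dynamic event */
QActive_postFIFO((QActive *)&l_consumer, (QEvent *)de);   /* deliver it directly */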

Publish-subscribe event delivery
Under this policy, event dispatch finds all subscribers of an event quickly through a lookup table.

Before any event can be published, the subscriber lookup table must be initialized. The implementation of this part is as follows:

typedef struct QSubscrListTag {
    uint8_t bits[((QF_MAX_ACTIVE - 1) / 8) + 1];   /* bitmask of subscribers */
} QSubscrList;

QSubscrList *QF_subscrList_;
QSignal QF_maxSignal_;

void QF_psInit(QSubscrList *subscrSto, QSignal maxSignal) {
    QF_subscrList_ = subscrSto;
    QF_maxSignal_ = maxSignal;
}
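A hypothetical initialization sketch; MAX_PUB_SIG is a made-up enumerator counting the published signals:

static QSubscrList l_subscrSto[MAX_PUB_SIG];   /* one subscriber list per signal */

QF_psInit(l_subscrSto, MAX_PUB_SIG);           /* must run before any QF_publish() */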

The signal subscription function (unsubscribing works similarly):

void QActive_subscribe(QActive const *me, QSignal sig) {
    uint8_t p = me->prio;                    /* get the priority */
    uint8_t i = Q_ROM_BYTE(QF_div8Lkup[p]);  /* byte index i within the multi-byte
                                              * list: QF_div8Lkup[p] == (p - 1) / 8 */
    QF_INT_LOCK_KEY_
    Q_REQUIRE(((QSignal)Q_USER_SIG <= sig)
              && (sig < QF_maxSignal_)
              && ((uint8_t)0 < p) && (p <= (uint8_t)QF_MAX_ACTIVE)
              && (QF_active_[p] == me));
    QF_INT_LOCK_();
    /* set the bit corresponding to the AO's priority in the subscriber list */
    QF_subscrList_[sig].bits[i] |= Q_ROM_BYTE(QF_pwr2Lkup[p]);
    QF_INT_UNLOCK_();
}

The event publication function must ensure that the event is not recycled before all of its subscribers have received it.

void QF_publish(QEvent const *e) {
    QF_INT_LOCK_KEY_
    /* make sure the published signal is within the configured range */
    Q_REQUIRE(e->sig < QF_maxSignal_);
    QF_INT_LOCK_();
    if (e->dynamic_ != (uint8_t)0) {      /* is it a dynamic event? */
        ++((QEvent *)e)->dynamic_;        /* increment the reference counter */
    }
    QF_INT_UNLOCK_();

#if (QF_MAX_ACTIVE <= 8)   /* conditional compilation for the single-byte case */
    {
        uint8_t tmp = QF_subscrList_[e->sig].bits[0];
        while (tmp != (uint8_t)0) {
            uint8_t p = Q_ROM_BYTE(QF_log2Lkup[tmp]);
            tmp &= Q_ROM_BYTE(QF_invPwr2Lkup[p]);      /* clear the subscriber bit */
            Q_ASSERT(QF_active_[p] != (QActive *)0);   /* the AO must be registered */
            /* internally asserts if the queue overflows */
            QActive_postFIFO(QF_active_[p], e);
        }
    }
#else
    {
        uint8_t i = Q_DIM(QF_subscrList_[0].bits);
        do {   /* iterate through all bytes in the subscription list */
            uint8_t tmp;
            --i;
            tmp = QF_subscrList_[e->sig].bits[i];
            while (tmp != (uint8_t)0) {
                uint8_t p = Q_ROM_BYTE(QF_log2Lkup[tmp]);
                tmp &= Q_ROM_BYTE(QF_invPwr2Lkup[p]);      /* clear the subscriber bit */
                p = (uint8_t)(p + (i << 3));               /* adjust the priority */
                Q_ASSERT(QF_active_[p] != (QActive *)0);   /* the AO must be registered */
                /* internally asserts if the queue overflows */
                QActive_postFIFO(QF_active_[p], e);
            }
        } while (i != (uint8_t)0);
    }
#endif
    QF_gc(e);   /* run the garbage collector */
}
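A hypothetical end-to-end sketch: an active object subscribes to a signal during its initial transition, and any other part of the system later publishes a matching event (TempEvt and TEMP_ALARM_SIG are made up):

/* in the subscriber's initial transition */
QActive_subscribe((QActive *)me, TEMP_ALARM_SIG);

/* anywhere else in the system */
TempEvt *te = Q_NEW(TempEvt, TEMP_ALARM_SIG);
te->celsius = 97;
QF_publish((QEvent *)te);   /* delivered to every subscriber, then garbage-collected */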
4. Clock Management

(I do not yet fully understand this part of the strategy.)
In the current version, time events cannot be dynamic and must be statically allocated.
Users can extend a time event with additional members (that is, custom information).
Here are the time-event structure and the functions that manipulate it:

typedef struct QTimeEvtTag {
    QEvent super;               /* inherit from QEvent */
    struct QTimeEvtTag *prev;   /* link to the previous time event */
    struct QTimeEvtTag *next;   /* link to the next time event */
    QActive *act;               /* active object that receives the time event */
    QTimeEvtCtr ctr;            /* internal down-counter */
    QTimeEvtCtr interval;       /* interval for periodic time events */
} QTimeEvt;

void QTimeEvt_ctor(QTimeEvt *me, QSignal sig);

#define QTimeEvt_postIn(me_, act_, nTicks_) do { \
    (me_)->interval = (QTimeEvtCtr)0; \
    QTimeEvt_arm_((me_), (act_), (nTicks_)); \
} while (0)

#define QTimeEvt_postEvery(me_, act_, nTicks_) do { \
    (me_)->interval = (nTicks_); \
    QTimeEvt_arm_((me_), (act_), (nTicks_)); \
} while (0)

uint8_t QTimeEvt_disarm(QTimeEvt *me);
uint8_t QTimeEvt_rearm(QTimeEvt *me, QTimeEvtCtr nTicks);
void QTimeEvt_arm_(QTimeEvt *me, QActive *act, QTimeEvtCtr nTicks);
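A hypothetical usage sketch; TIMEOUT_SIG and the me->timeEvt member are made up:

QTimeEvt_ctor(&me->timeEvt, TIMEOUT_SIG);             /* construct once */
QTimeEvt_postEvery(&me->timeEvt, (QActive *)me, 10);  /* fire every 10 clock ticks */
/* ... */
QTimeEvt_disarm(&me->timeEvt);                        /* stop it when no longer needed */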

System clock tick and the tick() function
To manage the clock tick, the tick() function must be called periodically. The typical rate of the system clock tick is 10 to 100 Hz.
The function is designed to be called from the interrupt context. If the underlying OS/RTOS does not allow interrupt-level access, or if you want the ISR to stay very short, it can be invoked from a task-level context instead. The function must not be called in a nested fashion; it should be called from a single task only, ideally the highest-priority one.
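A minimal sketch of a periodic tick interrupt, assuming the framework's tick function is named QF_tick() as in QP, and a hypothetical Cortex-M-style SysTick handler:

void SysTick_Handler(void) {   /* periodic interrupt at 10 to 100 Hz */
    QF_tick();                 /* age all armed time events */
}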
