Implementation of a lock-free queue with a circular array (network programming)

Source: Internet
Author: User
Tags: CAS, FAA

Address: http://www.cnblogs.com/chencheng/p/3527692.html


Lock-free design via the CAS operation. CAS (Compare-And-Swap) is an atomic operation with three operands: a memory location V, the expected old value oldval, and the new value newval; the value at V is replaced with newval only if V still holds oldval.

The queue is a circular array that keeps one slot unused: head == tail means the queue is empty, and tail + 1 == head means it is full. Each array element carries a status flag, EMPTY (no data, the slot may be written by an enqueue) or FULL (holds data, the slot may be read by a dequeue); all slots start out as EMPTY. Enqueue operation: if the slot at the current tail is EMPTY, a thread may write to it; it uses a CAS to flip the status to FULL so that no other thread can touch the slot, stores the element, and then advances the tail. All threads then compete for the new tail position. As shown in the following illustration:

Threads T1 and T2 compete for the tail position. T1 wins: it sets the FULL flag first and then writes the slot, while T2 sees the slot is already FULL and keeps polling. When T1 finishes, it advances the tail, and T1 and T2 start competing for the new tail position. Dequeue operation: if the slot at the current head is FULL, a thread may read from it; it uses a CAS to flip the status to EMPTY so that no other thread can touch the slot, reads the element, and then advances the head. All threads then compete for the new head position. No locks are taken anywhere: each thread optimistically assumes there is no conflict, performs its operation, and simply retries if the CAS fails.
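Both listings below map CAS onto GCC's legacy __sync builtin. A minimal sketch of its semantics (the variable and the values here are illustrative, not taken from the original code):

#define CAS __sync_bool_compare_and_swap

int status = 1;                  /* say 1 stands for EMPTY, 2 for FULL */
/* Atomically writes 2 into status only if status still equals 1;
   returns nonzero on success, zero if another thread got there first. */
if (CAS(&status, 1, 2)) {
    /* this thread won the race and now owns the slot */
}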

#include "stdlib.h" #include "stdio.h" #include <pthread.h> #define MAXLEN 2 #define CAS __SYNC_BOOL_COMPARE_AND_SW
    AP typedef struct {int elem;

int status;//for state monitoring}node;
    typedef struct {node Elepool[maxlen];
    int front;
int rear;

}queue;

enum {EMPTY =1, full,};

Queue G_que;
    void Initque () {int i = 0;
    G_que.front = 0;
    
    g_que.rear = 0;
    for (i=0;i<maxlen;i++) {g_que.elepool[i].status = EMPTY;
} return;
        int enque (int elem) {do {if (g_que.rear+1)%maxlen = = G_que.front) {return-1; }}while (!
    CAS (& (G_que.elepool[g_que.rear].status), empty,full));
    G_que.elepool[g_que.rear].elem = Elem;
    printf ("in--%d (%lu) \ n", elem,pthread_self ());
    
    CAS (& (G_que.rear), G_que.rear, (g_que.rear+1)%maxlen);
return 0;
        int deque (int* pelem) {do {if (g_que.rear = = G_que.front) {return-1; }}while (! CAS (& (G_que.elepool[g_que.front].status), full,empty));
    *pelem = G_que.elepool[g_que.front].elem;
    printf ("out--%d (%lu) \ n", *pelem,pthread_self ());
    CAS (& (G_que.front), G_que.front, (g_que.front+1)%maxlen);
return 0; }
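The listing above has no driver. A minimal test harness that compiles together with it is sketched below, assuming one producer and one consumer thread; the function names and the message count of 100 are illustrative and not part of the original post:

void *producer(void *arg)
{
    int i;
    for (i = 0; i < 100; i++) {
        while (enque(i) != 0) {
            /* queue full: spin until a slot frees up */
        }
    }
    return NULL;
}

void *consumer(void *arg)
{
    int v, got = 0;
    while (got < 100) {
        if (deque(&v) == 0)      /* deque returns -1 while the queue is empty */
            got++;
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    initque();
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Compile with gcc -pthread. With MAXLEN set to 2 the circular array holds a single element at a time, so the two threads hand messages over one by one.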
Lock-free design via CAS, FAA, and FAS operations. FAA (Fetch-And-Add): atomically adds 1 and returns the value held before the update. FAS (Fetch-And-Sub): atomically subtracts 1 and returns the value held before the update. Two counters are added: writeablecnt is the number of slots that can still be written, and readablecnt is the number of elements currently stored in the queue; together they bound how many threads may operate on the queue at once. Enqueue operation: an atomic add hands each requesting thread its own unique position, stored in the local variable pos, so threads fill their slots in parallel and no longer need to poll and wait. As shown in the following illustration:

Threads T1 and T2 start out operating on two different positions at the tail of the queue. As soon as T1 finishes, it moves straight on to the next tail position. Dequeue operation: the same as in the first version; if the slot at the current head is FULL, a thread may dequeue from it, uses a CAS to flip the status to EMPTY so that no other thread touches the slot, and then advances the head, with all threads competing for the new head position. Multiple threads can now enqueue at the same time instead of polling on the same slot, which improves efficiency significantly.
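FAA and FAS correspond to GCC's __sync_fetch_and_add and __sync_fetch_and_sub builtins; a minimal sketch of their return-value semantics (the counter and the values are illustrative):

#define FAA __sync_fetch_and_add
#define FAS __sync_fetch_and_sub

int cnt = 5;
int before_add = FAA(&cnt, 1);   /* before_add == 5, cnt is now 6 */
int before_sub = FAS(&cnt, 1);   /* before_sub == 6, cnt is back to 5 */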

#include "stdlib.h" #include "stdio.h" #include <pthread.h> #define MAXLEN #define NUM_THREADS 8 #define Num_
MSG #define CAS __sync_bool_compare_and_swap #define FAA __sync_fetch_and_add #define FAS __sync_fetch_and_sub
#define VCAS __sync_val_compare_and_swap int g_inputover = 0;
typedef struct {
    int elem;
    long threadid;          /* id of the producer thread that wrote this slot */
    int status;             /* indicates whether the node can be read */
} node;

typedef struct {
    node elepool[MAXLEN];
    int front;
    int rear;
    int writeablecnt;       /* number of nodes that can still be written */
    int readablecnt;        /* number of nodes that have been written */
} queue;

enum {EMPTY = 1, FULL};

queue g_que;

void initque(void)
{
    int i = 0;
    g_que.front = 0;
    g_que.rear = 0;
    g_que.readablecnt = 0;
    g_que.writeablecnt = MAXLEN;
    for (i = 0; i < MAXLEN; i++) {
        g_que.elepool[i].status = EMPTY;
    }
}

int enque(int elem)
{
    int pos = 0;
    if (FAS(&(g_que.writeablecnt), 1) <= 0) {   /* no writable slot left: roll back and give up */
        printf("dis-%d(%lu)\n", elem, pthread_self());
        FAA(&(g_que.writeablecnt), 1);
        return -1;
    }
    /* CAS(&(g_que.rear), g_que.rear, g_que.rear % MAXLEN); */
    CAS(&(g_que.rear), MAXLEN, 0);              /* wrap the tail index back to 0 */
    pos = FAA(&(g_que.rear), 1) % MAXLEN;       /* claim a unique slot for this thread */
    g_que.elepool[pos].elem = elem;
    g_que.elepool[pos].threadid = pthread_self();
    printf("in-%d(%lu),inpos=(%d),rear=(%d)\n", elem, pthread_self(), pos, g_que.rear);
    CAS(&(g_que.elepool[pos].status), EMPTY, FULL);
    FAA(&(g_que.readablecnt), 1);
    return 0;
}

int deque(int *pelem, int *pthreadid)
{
    /* printf("readablecnt--%d,pos=%d\n", g_que.readablecnt, g_que.front); */
    do {
        if (g_que.readablecnt == 0) {
            return -1;
        }
    } while (!CAS(&(g_que.elepool[g_que.front].status), FULL, EMPTY));
    *pelem = g_que.elepool[g_que.front].elem;
    *pthreadid = (int)g_que.elepool[g_que.front].threadid;
    CAS(&(g_que.front), g_que.front, (g_que.front + 1) % MAXLEN);
    FAS(&(g_que.readablecnt), 1);
    FAA(&(g_que.writeablecnt), 1);
    printf("out-%d(%d)(%lu)\n", *pelem, *pthreadid, pthread_self());
    return 0;
}

void *sendmsg(void *arg)
{
    int msgno = 0;
    for (msgno = 0; msgno < NUM_MSG; msgno++) {
        usleep(1000);
        enque(msgno);
    }
    g_inputover++;
    return NULL;
}

int main(void)
{
    int rc, i;
    pthread_t thread[NUM_THREADS];
    int elem, threadid;

    initque();
    for (i = 0; i < NUM_THREADS; i++) {
        printf(
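            /* The source listing is cut off at this printf. Everything from here on is a
               guessed continuation, not the original code: log thread creation, start the
               sender threads, then drain the queue from the main thread until every
               sender has finished. */
            "create thread %d\n", i);
        rc = pthread_create(&thread[i], NULL, sendmsg, NULL);
        if (rc != 0) {
            printf("pthread_create failed (%d)\n", rc);
            return -1;
        }
    }

    /* Consume until all sender threads are done and the queue is empty. */
    while (g_inputover < NUM_THREADS || g_que.readablecnt > 0) {
        if (deque(&elem, &threadid) != 0) {
            usleep(100);    /* queue momentarily empty: back off briefly */
        }
    }

    for (i = 0; i < NUM_THREADS; i++) {
        pthread_join(thread[i], NULL);
    }
    return 0;
}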
