30. In-Depth Understanding of Computer Systems: Notes on Concurrent Programming (2)

Source: Internet
Author: User

1. Shared Variables

1) Thread memory model

Threads are scheduled automatically by the kernel. Each thread has its own thread context, including a unique integer thread ID (TID), a stack, a stack pointer, a program counter, general-purpose registers, and condition codes. Each thread shares the rest of the process context with the other threads. This includes the entire user virtual address space, which consists of read-only text (code), read/write data, the heap, and all shared library code and data areas. The threads also share the same set of open files. [1]

Registers are never shared, while virtual memory is always shared.

The memory model for the separate thread stacks is not as clean. These stacks are contained in the stack area of the virtual address space and are usually accessed independently by their respective threads. We say usually rather than always, because different thread stacks are not protected from other threads: if a thread somehow manages to acquire a pointer to another thread's stack, it can read and write any part of that stack. The sample code below shows this, where the peer threads indirectly reference the contents of the main thread's stack through the global variable ptr.

2) Mapping variables to memory

As with global variables, the read/write area contains exactly one instance of each local static variable declared in the program. In contrast, the stack of each thread contains its own instances of all local automatic variables.

3) A variable v is shared if and only if one of its instances is referenced by more than one thread.

Sample Code

/* $begin sharing */
#include "csapp.h"
#define N 2

void *thread(void *vargp);

char **ptr;  /* global variable */

int main()
{
    int i;
    pthread_t tid;
    char *msgs[N] = {
        "hello from foo",
        "hello from bar"
    };

    ptr = msgs;
    for (i = 0; i < N; i++)
        Pthread_create(&tid, NULL, thread, (void *)i);
    Pthread_exit(NULL);
}

void *thread(void *vargp)
{
    int myid = (int)vargp;
    static int cnt = 0;  /* cnt is shared by all threads; myid is not */

    printf("[%d]: %s (cnt=%d)\n", myid, ptr[myid], ++cnt);
    return NULL;
}
/* $end sharing */

2. Use semaphores for synchronization

When multiple threads update the same shared variable, each update consists of three steps: load the variable into a register, update it, and store the result back to memory. When the steps of different threads interleave, updates can be lost and the results become inconsistent. The shared variable must therefore be protected so that each update operation is atomic.

A semaphore s is a global variable with a nonnegative integer value that can be manipulated only by two special operations, called P and V:

P(s):  while (s <= 0)
           ;   /* wait until s becomes positive */
       s--;

V(s):  s++;

The P operation waits for the semaphore s to become nonzero and then decrements it; the test and the decrement occur indivisibly. The V operation increments s, also atomically.

1) The basic idea is to associate each shared variable (or collection of related shared variables) with a semaphore s, initialized to 1, and then surround the corresponding critical section (the code that accesses the shared variable) with P(s) and V(s) operations. A semaphore used this way to protect shared variables is called a binary semaphore, because its value is always 0 or 1.

The definitions of P and V ensure that a running program can never enter a state in which a properly initialized semaphore has a negative value.

Section 11.4.4 introduces the available POSIX semaphore interface (sem_init, sem_wait for P, sem_post for V).

2) A binary semaphore is usually called a mutex (mutual exclusion lock). Performing a P operation on a mutex is called locking it; performing a V operation is called unlocking it. A thread that has locked a mutex but has not yet unlocked it is said to occupy the mutex.

3. Use semaphores to schedule shared resources

In this case, a thread uses a semaphore to notify another thread that some condition in the program state has become true, as in the producer-consumer problem.

Sample Code

#ifndef __SBUF_H__
#define __SBUF_H__

#include "csapp.h"

/* $begin sbuft */
typedef struct {
    int *buf;     /* Buffer array */
    int n;        /* Maximum number of slots */
    int front;    /* buf[(front+1)%n] is first item */
    int rear;     /* buf[rear%n] is last item */
    sem_t mutex;  /* Protects accesses to buf */
    sem_t slots;  /* Counts available slots */
    sem_t items;  /* Counts available items */
} sbuf_t;
/* $end sbuft */

void sbuf_init(sbuf_t *sp, int n);
void sbuf_deinit(sbuf_t *sp);
void sbuf_insert(sbuf_t *sp, int item);
int sbuf_remove(sbuf_t *sp);

#endif /* __SBUF_H__ */

/* Source code */
/* $begin sbufc */
#include "csapp.h"
#include "sbuf.h"

/* Create an empty, bounded, shared FIFO buffer with n slots */
/* $begin sbuf_init */
void sbuf_init(sbuf_t *sp, int n)
{
    sp->buf = calloc(n, sizeof(int));
    sp->n = n;                   /* Buffer holds max of n items */
    sp->front = sp->rear = 0;    /* Empty buffer iff front == rear */
    sem_init(&sp->mutex, 0, 1);  /* Binary semaphore for locking */
    sem_init(&sp->slots, 0, n);  /* Initially, buf has n empty slots */
    sem_init(&sp->items, 0, 0);  /* Initially, buf has zero data items */
}
/* $end sbuf_init */

/* Clean up buffer sp */
/* $begin sbuf_deinit */
void sbuf_deinit(sbuf_t *sp)
{
    free(sp->buf);
}
/* $end sbuf_deinit */

/* Insert item onto the rear of shared buffer sp */
/* $begin sbuf_insert */
void sbuf_insert(sbuf_t *sp, int item)
{
    P(&sp->slots);                           /* Wait for available slot */
    P(&sp->mutex);                           /* Lock the buffer */
    sp->buf[(++sp->rear) % (sp->n)] = item;  /* Insert the item */
    V(&sp->mutex);                           /* Unlock the buffer */
    V(&sp->items);                           /* Announce available item */
}
/* $end sbuf_insert */

/* Remove and return the first item from buffer sp */
/* $begin sbuf_remove */
int sbuf_remove(sbuf_t *sp)
{
    int item;
    P(&sp->items);                           /* Wait for available item */
    P(&sp->mutex);                           /* Lock the buffer */
    item = sp->buf[(++sp->front) % (sp->n)]; /* Remove the item */
    V(&sp->mutex);                           /* Unlock the buffer */
    V(&sp->slots);                           /* Announce available slot */
    return item;
}
/* $end sbuf_remove */
/* $end sbufc */

Reference

[1] http://www.cnblogs.com/mydomain/archive/2011/07/10/2102147.html
