Process synchronization
Earlier we mentioned cooperating processes: a cooperating process can affect, and be affected by, other processes, typically because they share a logical address space (a shared-memory system). In that case two processes may access the same variable at the same time and produce an incorrect result. An independent process, by contrast, neither affects nor is affected by other processes, and cooperation through message passing does not raise these process-synchronization issues.
So this chapter is about cooperating processes that communicate through shared memory.
Producer-consumer solution using a shared counter:
// producer
while (true) {
    while (counter == buffer_size)
        ;                              // buffer full, do nothing
    buffer[in] = nextproduced;
    in = (in + 1) % buffer_size;
    counter++;
}
// consumer
while (true) {
    while (counter == 0)
        ;                              // buffer empty, do nothing
    nextconsumed = buffer[out];
    out = (out + 1) % buffer_size;
    counter--;
}
Race condition: when several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place. Here counter++ and counter-- can interleave and leave counter with a wrong value.
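To see the race concretely, counter++ and counter-- each compile into several machine instructions; a sketch of the usual register-level decomposition and one bad interleaving (register1 and register2 are hypothetical CPU registers):

    counter++ becomes roughly          counter-- becomes roughly
        register1 = counter                register2 = counter
        register1 = register1 + 1          register2 = register2 - 1
        counter   = register1              counter   = register2

    With counter == 5, one possible interleaving is:
        producer: register1 = counter            (register1 = 5)
        producer: register1 = register1 + 1      (register1 = 6)
        consumer: register2 = counter            (register2 = 5)
        consumer: register2 = register2 - 1      (register2 = 4)
        producer: counter = register1            (counter = 6)
        consumer: counter = register2            (counter = 4, but the correct value is 5)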
Critical section: a segment of code in which a process accesses shared data; at most one process may be executing in its critical section at a time.
Each process's code is divided into an entry section, the critical section, an exit section and the remainder section. The entry section contains the code that requests permission to enter the critical section, and the exit section contains the code executed on leaving it.
The critical-section problem imposes three requirements: mutual exclusion, progress (if the critical section is free, some waiting process must be allowed in), and bounded waiting.
Inside the operating system, kernel code itself also runs into the critical-section problem.
With respect to code executing in kernel mode, a kernel is either nonpreemptive or preemptive.
Nonpreemptive kernel: once a process has entered kernel mode it is not preempted by another process, so kernel data structures are free from race conditions.
Preemptive kernel: a process executing in the kernel may be preempted, so kernel data can become inconsistent unless it is protected.
An SMP system is harder to control, because processes may be executing kernel code on different processors at the same time.
Peterson's algorithm: a software solution to the critical-section problem for two processes. Its spirit is "courtesy": each process first offers the turn to the other.
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;                 // busy wait while the other wants in and it is its turn
    ...                   // critical section
    flag[i] = false;
    // remainder section
} while (true);
flag[i] indicates that process i wants to enter its critical section; turn records whose turn it is to enter.
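A hedged, runnable sketch of Peterson's algorithm for two threads using C11 atomics; sequentially consistent atomic operations are used because plain loads and stores may be reordered by the compiler or CPU, which would break the algorithm (the worker/counter names and the 100000 iterations are illustrative, not from the book):

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

atomic_bool flag[2];        // flag[i]: process i wants to enter
atomic_int  turn;           // whose turn it is to yield to
long counter = 0;           // shared data protected by the lock

void lock(int i)
{
    int j = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, j);
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                   // busy wait while the other wants in and it is its turn
}

void unlock(int i)
{
    atomic_store(&flag[i], false);
}

void *worker(void *arg)
{
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        lock(i);
        counter++;          // critical section
        unlock(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld\n", counter);   // expect 200000
}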
The above is software-based synchronization; what follows is hardware-based synchronization.
On a single processor, a nonpreemptive solution can be obtained by disabling interrupts while shared variables are being modified.
On a multiprocessor, disabling interrupts this way is not feasible, because passing the disable-interrupts message to every processor is slow. Instead, machine instructions that execute atomically are required.
Atomic: executes as a single, non-interruptible unit.
The book introduces TestAndSet directly, which feels abrupt and hard to follow, so after checking other material the idea is built up step by step here.
do {
    while (turn != i)
        ;
    // critical section
    turn = j;
    // remainder section
} while (true);
The problem with this version is that turn = j in the exit section does not satisfy the progress requirement: entry strictly alternates, so if process j never wants to enter, process i cannot get in again either.
The following example uses a boolean flag array instead.
do {
    flag[i] = true;
    while (flag[j])
        ;                 // if both flags are true, both processes wait here forever
    // critical section
    flag[i] = false;
    // remainder section
} while (true);
The problem: if both processes set their flags to true at the same time, each waits for the other and neither ever enters, so the progress requirement is still not satisfied.
bool TestAndSet(bool *target)
{
    bool rv = *target;    // read the old value
    *target = true;       // set the lock
    return rv;            // the whole function executes atomically
}
do {
    while (TestAndSet(&lock))
        ;
    ...                   // critical section
    lock = false;
} while (true);
Analysis: lock starts as false, so the first process's TestAndSet returns false and it enters the critical section while atomically setting lock to true; any process that then finds lock true knows someone is inside and keeps spinning.
Drawback: this satisfies mutual exclusion and progress, but not bounded waiting.
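In C11 the TestAndSet idea is available directly as atomic_flag_test_and_set, which atomically sets the flag and returns its previous value; a minimal spinlock sketch under that assumption (spin_lock/spin_unlock are made-up names):

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;   // clear means unlocked

void spin_lock(void)
{
    while (atomic_flag_test_and_set(&lock_flag))
        ;                      // spin until we are the one who flips it from clear to set
}

void spin_unlock(void)
{
    atomic_flag_clear(&lock_flag);
}

Like the version above, this gives mutual exclusion and progress but not bounded waiting.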
void Swap(bool *a, bool *b)
{
    bool temp = *a;
    *a = *b;
    *b = temp;
}
do {
    key = true;
    while (key == true)
        Swap(&key, &lock);
    ...                   // critical section
    lock = false;
} while (true);
Analysis: lock starts as false and key is set to true; since the critical section is empty at first, the swap leaves key false (and lock true), so the process enters.
In fact the two ideas are the same: a process gets in only when it observes lock == false. Neither satisfies bounded waiting, because with many processes nothing defines which waiting process enters next.
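The Swap-based loop maps onto C11's atomic_exchange, which atomically stores a new value and returns the old one; a sketch under that assumption (swap_lock/swap_unlock are made-up names):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool lock_taken = false;   // false means no one is in the critical section

void swap_lock(void)
{
    bool key = true;
    while (key)                          // keep "swapping" until the old value read back is false
        key = atomic_exchange(&lock_taken, true);
}

void swap_unlock(void)
{
    atomic_store(&lock_taken, false);
}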
The following version adds bounded waiting. On exit the waiting processes are scanned like a circular queue: if some j != i has waiting[j] == true, that process is released by setting waiting[j] = false; if the scan comes back around to i (j == i), no one is waiting and lock is set to false.
-----------------------
do {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = false;
    ....                         // critical section
    j = (i + 1) % n;
    while (j != i && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;            // no one is waiting: release the lock
    else
        waiting[j] = false;      // hand the critical section directly to process j
} while (true);
A semaphore S is an integer variable, thought of as the number of available resources, that can be used for synchronization. It is accessed only through two atomic operations: wait(S), which decrements it, and signal(S), which increments it.
Semaphores come in two kinds: counting semaphores and binary semaphores.
A process calls wait when it needs to use the resource.
This simple semaphore has a drawback: busy waiting. While one process is in its critical section, every other process that tries to enter must keep looping in the entry code and can only get in once the first process leaves, which wastes CPU time. A semaphore that waits this way is also called a spinlock, because the process spins while waiting for the lock. The remedy is to let a waiting process block instead, so the CPU can be given to other processes.
In the modified definition, wait places the calling process in the semaphore's waiting queue when no resource is available, and the CPU scheduler picks another process to run.
Using a semaphore to force two processes to access some data in a fixed order:
P1
    S1;                // the statement that must execute first
    signal(S);
-------------------
P2
    wait(S);
    S2;                // the statement that must execute second
-------------------
With S initialized to 0, P2 blocks in wait(S) until P1 has executed signal(S), so P1's statement is guaranteed to run first.
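The same ordering trick written as a runnable sketch with POSIX semaphores (the thread names p1/p2 and the printed messages are illustrative):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t s;                                     // initialized to 0 below

void *p1(void *arg)
{
    (void)arg;
    printf("P1: the statement that must run first\n");
    sem_post(&s);                            // signal(S)
    return NULL;
}

void *p2(void *arg)
{
    (void)arg;
    sem_wait(&s);                            // wait(S): blocks until P1 has posted
    printf("P2: the statement that must run second\n");
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&s, 0, 0);                      // value 0 forces P2 to wait for P1
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&s);
}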
So the drawback of the basic semaphore is busy waiting (which can also be an advantage, because a context switch may cost more than a short spin); a semaphore whose wait operation busy-waits is called a spinlock, because it just keeps spinning.
The alternative to spinning: when a process calls wait and no resource is available, it blocks and joins the semaphore's waiting queue; when signal later makes a resource available, wakeup moves it back to the ready queue in memory.
struct semaphore
{
    int value;
    struct process *list;    // waiting queue; signal takes one process from it and wakes it up
};
wait(semaphore *s)
{
    s->value--;
    if (s->value < 0) {
        // add this process to s->list
        block();             // the process blocks and enters the waiting queue
    }
}
signal(semaphore *s)
{
    s->value++;
    if (s->value <= 0) {     // value still <= 0 means some process was waiting
        // remove a process P from s->list
        wakeup(P);           // wake one process from the waiting queue
    }
}
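The block/wakeup idea can be sketched with POSIX threads, using a mutex plus a condition variable to stand in for the waiting queue; this version keeps value non-negative instead of letting it go below zero, and the names semaphore, sem_init_op, sem_wait_op and sem_signal_op are made up here to avoid clashing with the POSIX sem_* functions:

#include <pthread.h>

typedef struct {
    int value;                       // number of available resources
    pthread_mutex_t m;
    pthread_cond_t  cv;              // plays the role of the waiting queue
} semaphore;

void sem_init_op(semaphore *s, int value)
{
    s->value = value;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->cv, NULL);
}

void sem_wait_op(semaphore *s)
{
    pthread_mutex_lock(&s->m);
    while (s->value == 0)            // no resource: block instead of spinning
        pthread_cond_wait(&s->cv, &s->m);
    s->value--;
    pthread_mutex_unlock(&s->m);
}

void sem_signal_op(semaphore *s)
{
    pthread_mutex_lock(&s->m);
    s->value++;
    pthread_cond_signal(&s->cv);     // wake up one waiter, if any
    pthread_mutex_unlock(&s->m);
}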
A multiprocessor can still use spinlocks for synchronization: one process spins on one processor while another process runs its critical section on a different processor, so no context switch is needed.
There is only one essential requirement for these P and V operations: wait and signal must execute atomically.
On a single processor this atomicity is obtained by disabling interrupts while wait and signal execute; on an SMP system interrupts would have to be disabled on every processor, which is expensive, so alternative locking techniques are also used.
Deadlock: semaphores can also lead to deadlock. For example, process A is blocked in wait(S) and can continue only when process B executes signal(S), while B is blocked in wait(T) and can continue only when A executes signal(T); neither can proceed, and both are deadlocked.
If the semaphore's waiting queue is handled in LIFO order, indefinite blocking (starvation) can occur: a process may stay in the queue forever and never execute.
Classic synchronization problems:
1. Bounded buffer: the producer-consumer problem.
empty counts the empty buffer slots, full counts the filled slots, and mutex is a binary semaphore protecting the buffer (empty is initialized to the buffer size, full to 0, mutex to 1).
Producer
do {
    // produce an item
    wait(empty);
    wait(mutex);
    // add the item to the buffer
    signal(mutex);
    signal(full);
} while (true);
Consumer
do {
    wait(full);
    wait(mutex);
    // remove an item from the buffer
    signal(mutex);
    signal(empty);
    // consume the item
} while (true);
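A runnable sketch of the bounded buffer with POSIX semaphores (BUFFER_SIZE, the int items and the fixed 100 iterations are illustrative choices, not from the book):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define BUFFER_SIZE 8

int buffer[BUFFER_SIZE];
int in = 0, out = 0;
sem_t empty_slots, full_slots, mutex;

void *producer(void *arg)
{
    (void)arg;
    for (int item = 0; item < 100; item++) {
        sem_wait(&empty_slots);              // wait(empty)
        sem_wait(&mutex);                    // wait(mutex)
        buffer[in] = item;                   // add the item to the buffer
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);                    // signal(mutex)
        sem_post(&full_slots);               // signal(full)
    }
    return NULL;
}

void *consumer(void *arg)
{
    (void)arg;
    for (int n = 0; n < 100; n++) {
        sem_wait(&full_slots);               // wait(full)
        sem_wait(&mutex);                    // wait(mutex)
        int item = buffer[out];              // remove an item from the buffer
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);                    // signal(mutex)
        sem_post(&empty_slots);              // signal(empty)
        printf("consumed %d\n", item);       // consume the item
    }
    return NULL;
}

int main(void)
{
    sem_init(&empty_slots, 0, BUFFER_SIZE);  // all slots start empty
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
}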
2. Readers-writers problem
Many readers may read the shared data at the same time, but only one writer may access it, and then exclusively.
There are a first and a second readers-writers problem.
First readers-writers problem: no reader waits unless a writer has already entered the critical section (readers have priority).
Second readers-writers problem: once a writer is waiting, it gets to write as soon as possible; no new reader may start ahead of it, although readers already reading are not preempted.
Here mutex and wrt are binary semaphores initialized to 1, and readcount counts how many readers are currently reading.
Writer
do {
    wait(wrt);
    // critical section: perform the write
    signal(wrt);
} while (true);
Reader
do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);           // the first reader locks out writers
    signal(mutex);
    // perform the read
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);         // the last reader lets writers back in
    signal(mutex);
} while (true);
Java, of course, provides read-write locks, which make it convenient to synchronize and to solve reader-writer problems, at the cost of somewhat higher overhead.
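POSIX offers the equivalent facility in C as pthread_rwlock_t; a minimal sketch (shared_data, read_data and write_data are made-up names):

#include <pthread.h>

pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
int shared_data = 0;

int read_data(void)
{
    pthread_rwlock_rdlock(&rw);     // many readers may hold the lock at once
    int v = shared_data;
    pthread_rwlock_unlock(&rw);
    return v;
}

void write_data(int v)
{
    pthread_rwlock_wrlock(&rw);     // a writer gets exclusive access
    shared_data = v;
    pthread_rwlock_unlock(&rw);
}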
3. The dining-philosophers problem
semaphore chopstick[5];      // each initialized to 1
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
} while (true);
This version can deadlock: if every philosopher executes the first wait at the same time (each grabs one chopstick), none of them can ever execute the second wait. So it must be abandoned.
Workarounds:
1. Allow at most four philosophers to sit at the table at the same time.
2. Allow a philosopher to pick up chopsticks only when both are available (both are picked up inside a critical section).
3. An odd-numbered philosopher picks up the left chopstick first and then the right one, while an even-numbered philosopher does the opposite (a sketch of this solution follows below).
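A runnable sketch of workaround 3 with POSIX semaphores (the 3 rounds of eating and the printf are illustrative; odd philosophers take the left chopstick first, even ones the right, which breaks the circular wait):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 5

sem_t chopstick[N];                          // each initialized to 1

void *philosopher(void *arg)
{
    int i = *(int *)arg;
    int left = i, right = (i + 1) % N;
    for (int round = 0; round < 3; round++) {
        if (i % 2 == 1) {                    // odd philosopher: left chopstick first
            sem_wait(&chopstick[left]);
            sem_wait(&chopstick[right]);
        } else {                             // even philosopher: right chopstick first
            sem_wait(&chopstick[right]);
            sem_wait(&chopstick[left]);
        }
        printf("philosopher %d eats\n", i);  // eat
        sem_post(&chopstick[left]);
        sem_post(&chopstick[right]);
        // think
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++)
        sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
}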
Semaphores are still not safe enough, because they are easy to use incorrectly. For this reason the monitor was introduced.
By default only one process can be active inside a monitor at a time, so the mutual-exclusion code does not have to be written by hand.
Additional synchronization is expressed with variables of type condition. Only the wait and signal operations can be applied to a condition variable.
wait simply blocks the calling process, and signal simply resumes one blocked process. Like a semaphore, a condition variable has a waiting queue and is used to suspend processes.
A monitor-based solution to the dining-philosophers problem:
monitor DP
{
    enum {thinking, hungry, eating} state[5];
    condition self[5];

    void pickup(int i)
    {
        state[i] = hungry;
        test(i);
        if (state[i] != eating)
            self[i].wait();          // a neighbour is eating, so wait
    }

    void putdown(int i)
    {
        state[i] = thinking;
        test((i + 4) % 5);           // let the left neighbour eat if it now can
        test((i + 1) % 5);           // let the right neighbour eat if it now can
    }

    void test(int i)
    {
        if (state[(i + 4) % 5] != eating && state[i] == hungry && state[(i + 1) % 5] != eating)
        {
            state[i] = eating;
            self[i].signal();
        }
    }

    initialization_code()            // initialization code, must be present
    {
        for (int i = 0; i < 5; i++)
            state[i] = thinking;
    }
}
Remember: the monitor's operations are mutually exclusive by default.
The question discussed next is which process in a condition variable's waiting queue should be resumed when signal is called.
A conditional-wait construct with a priority number is commonly used: each process declares in advance the maximum demand of its request (for example how long it will hold the resource), and the waiting process with the shortest declared request is resumed first.
It is also important to note that the higher-level operations provided by the monitor must themselves be used correctly.