Operating System Process Management

Introduction

This article introduces process management in the operating system: typical process scheduling algorithms, the relationship between processes and threads, and the mutual-exclusion and synchronization mechanisms between processes.

Basic knowledge

Process scheduling and management is an important part of operating-system knowledge.

Each process is in one of the following three states:

  1. Ready
  2. Running
  3. Blocked

A process typically starts in the ready state. When the CPU selects it to run, it enters the running state. If the process performs an I/O operation, it becomes blocked; if its time slice is used up, it goes from the running state back to the ready state to wait for CPU scheduling again.
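The state transitions above can be sketched in a few lines of Python (the `Process` class and the event names are illustrative, not from any real kernel):

```python
# A minimal sketch of the three-state process model. The state names and
# transition rules follow the text; the Process class is hypothetical.
class Process:
    TRANSITIONS = {
        ("ready", "dispatch"): "running",   # the CPU selects the process
        ("running", "timeout"): "ready",    # time slice used up
        ("running", "io_wait"): "blocked",  # process starts an I/O operation
        ("blocked", "io_done"): "ready",    # I/O completes; wait for the CPU again
    }

    def __init__(self):
        self.state = "ready"                # processes start in the ready state

    def fire(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"illegal transition: {key}")
        self.state = self.TRANSITIONS[key]
        return self.state

p = Process()
p.fire("dispatch")   # ready -> running
p.fire("io_wait")    # running -> blocked
p.fire("io_done")    # blocked -> ready
```

Note there is no ready-to-blocked or blocked-to-running edge: a blocked process must pass through the ready queue before running again.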

Common Scheduling Algorithms

FCFS

First Come, First Served: processes run in the order in which they arrive.

The advantage of this method is that it is easy to implement; it is also the most obvious scheduling scheme. However, it has two major problems:

  1. It penalizes short processes

    A short process can only run after the long processes ahead of it have finished, so it may wait a long time.

  2. It penalizes I/O-intensive processes

    I/O-intensive processes fare even worse than short ones: such a process waits in the queue for its turn, runs only briefly before blocking on I/O, and once the I/O completes it must queue up all over again.

    This algorithm is therefore extremely inefficient for I/O-intensive processes.
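The convoy effect of point 1 is easy to see in a sketch (the helper function is hypothetical): a 1-tick job queued behind a 100-tick job waits 100 ticks before it even starts.

```python
# A small FCFS simulation: each process waits for everyone ahead of it.
def fcfs_wait_times(burst_times):
    """Return the per-process waiting time under First Come, First Served."""
    waits, clock = [], 0
    for burst in burst_times:
        waits.append(clock)   # time spent waiting before this process runs
        clock += burst        # the CPU is then busy for the whole burst
    return waits

print(fcfs_wait_times([100, 1, 1]))  # [0, 100, 101] -- the short jobs suffer
```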

RR

Round Robin. This polling algorithm allocates a fixed time slice to each process; when the slice is used up, the process must re-queue at the end of the ready queue.

This design solves the first problem of FCFS and alleviates the second.

However, I/O-intensive processes are still not served well. One optimization is to maintain two queues and place processes returning from I/O blocking in a separate auxiliary queue, which is given preference when the next process to run is selected.

Another tricky problem with RR is how to choose the time slice. If the slice is too long, the algorithm degrades to FCFS; if it is too short, the switching overhead becomes too large.
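The round-robin mechanics can be sketched as follows (the helper is illustrative); note at the end how a very large quantum reproduces FCFS ordering, which is the degradation just described.

```python
from collections import deque

# A sketch of Round Robin with a fixed quantum.
def round_robin_completion(burst_times, quantum):
    """Return the completion time of each process under RR scheduling."""
    queue = deque((i, b) for i, b in enumerate(burst_times))
    done, clock = [0] * len(burst_times), 0
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((pid, remaining - run))  # back to the end of the queue
        else:
            done[pid] = clock                     # process finished
    return done

print(round_robin_completion([3, 1], quantum=2))    # [4, 3]: the short job no longer waits out the long one
print(round_robin_completion([3, 1], quantum=100))  # [3, 4]: a huge quantum degrades to FCFS
```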

Prediction

Prediction-based algorithms assume that we know in advance each process's total required run time and its I/O proportion.

If we can collect this information, we can use the following scheduling algorithms:

  1. Shortest total run time first.
  2. Shortest remaining time first.
  3. Ordering by a combination of time already run and time remaining.

However, in the real world this information is generally unpredictable. Even for the same program, previous runs may tell us little about the current run. For the cat program, for example, different arguments have a huge impact on the running time.

Feedback

This is an optimization of Prediction. Where Prediction tries to predict the resources a process will need in the future, the Feedback algorithm assigns priority based on the resources the process has already consumed.

Generally, the longer a process has already run, the lower the priority the scheduler assigns it.

The Old Unix Scheduling Algorithm

This is the scheduling algorithm used before kernel version 2.6; it is an optimized RR algorithm. The priority of each process is computed as follows:

p(i) = base(i) + nice(i) + cpu(i)

The base and nice values are static and fixed; the nice value can be specified by the user.

The cpu term is an adjustment factor: the longer the process has used the CPU, the lower its priority. The factor is adjusted as follows:

cpu(i) = cpu(i-1)/2

That is, each time a process is scheduled, its cpu factor is halved for the next round, reducing the factor's weight on the next run.
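The two formulas combine as follows. The numbers are arbitrary illustrations; in classic Unix a lower p value means higher priority, so under the text's decay rule a process's priority recovers toward base + nice between runs.

```python
# Sketch of the old Unix priority formula from the text:
#   p(i) = base(i) + nice(i) + cpu(i)
# with the cpu penalty halved each scheduling round:
#   cpu(i) = cpu(i-1) / 2
def priority(base, nice, cpu):
    return base + nice + cpu

cpu = 60.0           # accumulated CPU penalty (illustrative starting value)
history = []
for _ in range(3):   # three consecutive scheduling rounds
    cpu = cpu / 2    # the decay rule: penalty halves each round
    history.append(priority(base=50, nice=0, cpu=cpu))

print(history)  # [80.0, 65.0, 57.5] -- the penalty decays toward base + nice
```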

New Scheduling Algorithm

The scheduler was rewritten in kernel 2.6; the new design is known as the O(1) algorithm.

This algorithm defines 40 priorities, numbered 100 through 139. A process's priority is computed much as in the old algorithm. The system additionally keeps a bitmap in which each bit indicates whether any process is waiting at the corresponding priority. Scheduling then selects the highest-priority non-empty queue via the bitmap and runs the first process in that queue.
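A sketch of that bitmap lookup (the class is illustrative; in this toy version level 0 stands for priority 100, the highest of the normal range, so the lowest set bit marks the best queue). The point is that picking the next process is a constant-time find-first-set, independent of how many processes are queued.

```python
# Toy model of the O(1) scheduler's run queues: one bit per priority level
# marks whether that level's queue is non-empty.
class RunQueues:
    def __init__(self, levels=40):            # priorities 100..139 -> 40 levels
        self.queues = [[] for _ in range(levels)]
        self.bitmap = 0

    def enqueue(self, level, pid):
        self.queues[level].append(pid)
        self.bitmap |= 1 << level             # mark the level as non-empty

    def pick_next(self):
        if self.bitmap == 0:
            return None                       # nothing is runnable
        # isolate the lowest set bit: that is the best non-empty priority
        level = (self.bitmap & -self.bitmap).bit_length() - 1
        pid = self.queues[level].pop(0)       # first process in that queue
        if not self.queues[level]:
            self.bitmap &= ~(1 << level)      # queue drained: clear its bit
        return pid
```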

SMP Scheduling

Scheduling on a multi-processor system is similar to the algorithms above, but after selecting a process, the scheduler must also decide which CPU it should run on.

Generally, to take advantage of a CPU's local cache, a process is preferentially scheduled onto the CPU it last ran on. Of course, to keep the load across CPUs balanced, some processes will inevitably be migrated.

Threads: User Threads & Kernel Threads

Since their invention, threads have fallen into two categories: user-level threads and kernel-level threads.

What is common in Linux is the kernel-level thread, meaning that all operations related to thread scheduling are implemented in the kernel. With user-level threads, by contrast, scheduling is implemented by a library in user space.

Advantages of user-level threads:

  1. Low thread-switching cost: no kernel involvement is required.
  2. Thread scheduling policies can be customized.
  3. They are independent of the operating system and can be quickly ported to another machine.

However, the user thread also has the following problems:

  1. When one thread blocks, the other threads are affected as well, because the operating system cannot see them.
  2. Multi-core hardware cannot be exploited well, because the operating system schedules the whole kernel process onto a single CPU.

Currently, Linux uses only kernel-level threads, while Solaris provides both kinds.

Thread Switching

The context of a process consists of the following information:

  1. Program counter
  2. Register information
  3. Stack information

A process switch includes the following steps:

  1. Save the context of the current process
  2. Put the current process into the appropriate operating-system queue
  3. Select another process using the scheduling algorithm
  4. Switch the virtual memory mapping
  5. Load the context of the new process

Thread switching is different: within the same process, it essentially amounts to switching the code address in the PC register together with the other register and stack state. No address-space switch is required, so thread switching costs much less than process switching.

Introduction to mutex and Synchronization

When multiple processes need to access the same resource, mutual exclusion between them must be enforced to avoid the chaos that simultaneous use of the resource would cause.

Typical mutual-exclusion implementation schemes are as follows:

  1. Disable interrupts

    A heavy-handed approach that hurts yourself almost as much as the problem it solves: mutual exclusion is achieved, but processor efficiency drops sharply, and on a multi-processor architecture it cannot achieve mutual exclusion at all.

  2. Dedicated machine instructions

    An uninterruptible instruction modifies a value in memory. Two common instructions are test-and-set and exchange. The advantage of this scheme is that it is easy to implement; the disadvantages are busy waiting, and the possibility of starvation and deadlock, which the operating-system layer must manage and avoid.
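As an illustration of the dedicated-instruction approach, here is a Python sketch of a test-and-set spin lock. The `TestAndSetLock` class is hypothetical; a `threading.Lock` merely stands in for the atomicity that the real hardware instruction provides.

```python
import threading

# Sketch of a spin lock built on test-and-set. The inner threading.Lock
# imitates the atomicity of the hardware instruction; the spinning in
# acquire() is the busy waiting mentioned in the text.
class TestAndSetLock:
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()   # stands in for hardware atomicity

    def test_and_set(self):
        """Atomically return the old flag value and set the flag to True."""
        with self._atomic:
            old, self._flag = self._flag, True
            return old

    def acquire(self):
        while self.test_and_set():        # busy-wait until the flag was False
            pass

    def release(self):
        self._flag = False

counter = 0
lock = TestAndSetLock()

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1                      # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000 -- no updates lost despite four competing threads
```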
Deadlock Prevention

There are four conditions for a deadlock:

  1. Mutual exclusion: a resource can be used by only one process at a time
  2. Hold and wait: a process keeps the resources it already holds while waiting for other resources
  3. No preemption: resources occupied by another process cannot be forcibly taken away
  4. Circular wait: there is a closed chain of processes in which each process waits for a resource held by the next

To prevent deadlocks, one of the above conditions must be broken:

  1. Mutual exclusion: this is required for the resource to function correctly and cannot be avoided.
  2. Hold and wait: require a process to request all the resources it needs before it starts. The problems are that the process may wait a long time, resources may sit held for a long time, and the program must know up front which resources it will use;
  3. No preemption: based on process priority, either make a requesting process release the resources it holds, or let it preempt resources from other processes. The main problem is that a resource's state is not always easy to save and restore (unlike the processor, whose state the hardware can readily switch);
  4. Circular wait: define a global order on resources and allow processes to request them only in that order, which greatly reduces process execution efficiency.

The above schemes all have various problems, so they are generally not used. The most widely adopted approach is not to prevent deadlocks up front, but to detect them dynamically:

  1. Refuse any process start or new resource request that could lead to a deadlock.

    The typical algorithm here is the banker's algorithm. Its disadvantage is that it must know in advance which resources a process will occupy in the future.

  2. Allow all requests, and periodically check for deadlocks.

    Dynamic detection keeps normal operation efficient; however, if deadlocks occur frequently, the system as a whole runs less efficiently.
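The core of the banker's algorithm is a safety check: grant a request only if some ordering still lets every process obtain its remaining need, finish, and return what it holds. A sketch (the function and matrix layout are illustrative: `available` is the free count per resource type, `allocation` and `need` are per-process rows):

```python
# Banker's algorithm safety check (illustrative sketch).
def is_safe(available, allocation, need):
    work = list(available)                # resources currently free
    finished = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, row in enumerate(need):
            if not finished[i] and all(n <= w for n, w in zip(row, work)):
                # process i can run to completion and release what it holds
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
    return all(finished)                  # safe iff everyone can finish

# Classic five-process, three-resource example: this state is safe.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], allocation, need))  # True
```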


Programming Interface

With the hardware support for mutual exclusion and synchronization described above in place, we can now look at the programming interfaces the operating system provides to programmers for concurrent programming.

Semaphores

A semaphore maintains an integer in memory; each operation either increments or decrements it.

Two interfaces are provided to the caller:

  1. semWait

    Checks the integer's value. If it is greater than 0, the operation decrements the value and enters the critical section; otherwise it blocks until the value becomes greater than 0.

  2. semSignal

    Increments the integer and wakes a blocked process. Which process is woken? Some implementations use a FIFO policy, others a random one.
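A minimal sketch of semWait/semSignal built on a condition variable (the class is illustrative; real semaphores are provided by the kernel):

```python
import threading

# Illustrative semaphore: the counter plus the two operations from the text.
class Semaphore:
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def sem_wait(self):
        with self._cond:
            while self._value <= 0:      # block until the count is positive
                self._cond.wait()
            self._value -= 1             # take a unit; enter the critical section

    def sem_signal(self):
        with self._cond:
            self._value += 1
            self._cond.notify()          # wake one blocked waiter
```

Note that `notify()` wakes a single waiter, mirroring the wake-one policies mentioned above; the exact choice of waiter is left to the underlying implementation.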

Monitors

The semaphore approach is flexible, letting programmers place critical sections and design interactions at will. Most programs today adopt solutions of this kind; it is a relatively low-level but powerful mechanism.

However, it was pointed out that semaphore operations end up scattered anywhere in a module, making programs hard to write and maintain and prone to bugs. In the 1970s, the concept of the monitor was therefore proposed. In my own work, though, I have not actually used monitors to achieve mutual exclusion and synchronization between processes.

The underlying mechanism of a monitor is similar to a semaphore, but a monitor encapsulates all of the locking and unlocking logic in one class: every operation on the critical resource is a function of that class, and no locks are visible anywhere outside the class. In this way, all lock-related logic is concentrated in one place.

A class can contain multiple locks. As with semaphores, each lock is acquired and released through functions similar to semWait and semSignal.
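A bounded buffer is the classic monitor example: all locking lives inside one class, and callers simply invoke `put` and `get` without ever seeing a lock. The class below is an illustrative sketch using Python's condition variables.

```python
import threading

# Monitor sketch: the lock and both condition variables are private to the
# class; callers interact only through put() and get().
class BoundedBuffer:
    def __init__(self, capacity):
        self._items = []
        self._capacity = capacity
        self._lock = threading.Lock()                  # the monitor's hidden lock
        self._not_full = threading.Condition(self._lock)
        self._not_empty = threading.Condition(self._lock)

    def put(self, item):
        with self._not_full:
            while len(self._items) >= self._capacity:
                self._not_full.wait()                  # like semWait on "not full"
            self._items.append(item)
            self._not_empty.notify()                   # like semSignal on "not empty"

    def get(self):
        with self._not_empty:
            while not self._items:
                self._not_empty.wait()                 # wait for an item to appear
            item = self._items.pop(0)
            self._not_full.notify()                    # a slot just opened up
            return item
```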

Message Passing

Message passing differs from the lock-based approach: it achieves synchronization between processes via blocking while at the same time exchanging information. The API it provides is low-level; all other logic is handed to the upper layer, under the programmer's control.

Its APIs are as follows:

  1. Send(destination, message)

    Send a message.

  2. Receive(source, message)

    Receive a message.

Depending on whether each interface blocks, there are generally the following combinations:

  1. Both send and receive block.

    Generally used for tight synchronization between processes.

  2. Send does not block; receive blocks.

    This is the common arrangement: after sending, the sender can continue with other work, while the receiver blocks until the relevant message arrives.

  3. Neither send nor receive blocks.

    Relatively rare.

Distributed systems generally use this method for synchronization and mutual exclusion between processes when the cooperation crosses machines.
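The common combination, non-blocking send with blocking receive, can be sketched with threads standing in for processes. The `send`/`receive` helpers and the one-mailbox-per-destination layout are hypothetical, not a real message-passing API:

```python
import threading
import queue

# One mailbox per destination; threads stand in for processes.
mailboxes = {"worker": queue.Queue()}

def send(destination, message):
    mailboxes[destination].put(message)  # non-blocking: the sender moves on

def receive(source):
    return mailboxes[source].get()       # blocking: waits for a message

replies = []

def worker():
    msg = receive("worker")              # blocks until the message arrives
    replies.append(msg.upper())

t = threading.Thread(target=worker)
t.start()
send("worker", "ping")                   # sender continues immediately
t.join()
print(replies)  # ['PING']
```

The blocking `get` is what provides the synchronization: the worker cannot proceed past `receive` until the sender has produced something.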
