Scheduling of processes, threads, and processors

Source: http://blog.sina.com.cn/s/blog_5a2bbc860101gedc.html

(1) Concept of the process (Dijkstra)

A process is a computational activity over a data set that can execute concurrently; it is also the basic unit of resource allocation and scheduling in the operating system.

(2) Links and differences between processes and programs

① A program is an ordered set of instructions; by itself it has no notion of running and is a static concept. A process is an execution of a program on a processor and is a dynamic concept.

② A program can be kept as software for a long time, while a process has a finite lifetime: the program is permanent, the process is temporary.

Note: a program can be thought of as a recipe, and a process as the act of cooking according to that recipe.

③ A process and a program differ in composition: a process consists of three parts, the program, the data, and the process control block (PCB).

④ A process and a program do not correspond one to one: through multiple executions, one program can correspond to multiple processes; through invocation, one process can include multiple programs.

(3) Characteristics of the process

Dynamic: a process is an execution of a program; a process has a life cycle.

Concurrency: multiple processes can reside in memory simultaneously and execute concurrently over a period of time.

Independence: a process is the basic unit of resource allocation and scheduling.

Constraint: concurrent processes restrict one another, which makes their execution speeds unpredictable; the system must coordinate their execution order and relative speeds.

Structure: a process consists of three parts, a program block, a data block, and a process control block.

Three basic states of the process:

(1) Running state (Running)

When a process has obtained a processor and its program is currently executing on that processor, the process is said to be in the running state.

In a single-CPU system, at most one process is running at any moment. In a multi-CPU system, the number of running processes is at most the number of processors.

(2) Ready state (Ready)

When a process has everything it needs to run except the CPU, so that it could run immediately once a CPU becomes available, it is in the ready state. The system maintains a ready queue; processes in the ready state are placed in this queue according to some scheduling policy.

(3) Waiting state (blocked state) (Wait/Blocked)

If a process is waiting for an event to occur (such as the completion of an I/O operation), its state of temporarily suspended execution is called the waiting state. A process in the waiting state lacks the conditions to run and cannot execute even if it is given a CPU. The system maintains several waiting queues (one queue per awaited event).

Running to waiting: the process waits for an event to occur (such as waiting for I/O to complete).

Waiting to ready: the awaited event has occurred (such as I/O completion).

Running to ready: the time slice expires (like a class period ending when the bell rings), or a higher-priority process arrives and the current process is forced to give up the processor.

Ready to running: when the processor becomes free, the dispatcher selects a process from the ready queue to occupy the CPU.

The above three states are the most basic states of a process; in actual operating system implementations, a process has far more states than these three.
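As an illustration, the four legal transitions can be encoded in a few lines of C. This is a minimal sketch with hypothetical names, not code from any real system:

    #include <stdio.h>

    typedef enum { READY, RUNNING, WAITING } proc_state;

    /* Returns 1 if the transition is one of the four described above. */
    int legal_transition(proc_state from, proc_state to) {
        return (from == RUNNING && to == WAITING)  /* waits for an event      */
            || (from == WAITING && to == READY)    /* the event has occurred  */
            || (from == RUNNING && to == READY)    /* time slice expired      */
            || (from == READY   && to == RUNNING); /* dispatched to the CPU   */
    }

    int main(void) {
        /* A waiting process cannot go straight to running: */
        printf("WAITING -> RUNNING legal? %d\n", legal_transition(WAITING, RUNNING)); /* 0 */
        printf("WAITING -> READY   legal? %d\n", legal_transition(WAITING, READY));   /* 1 */
        return 0;
    }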


Why does the process have a "suspended" state?

Because the system keeps creating processes, system resources, especially main memory, may become unable to meet the processes' demands. At that point some processes must be suspended, moved to the swap area on disk, and their resources released; they are temporarily excluded from low-level scheduling. The purpose is to smooth out the system load.


Description and composition of the process

The contents of a process together with its state collection are called the process image. It includes:

Process control block: each process has a process control block that stores the process's identification information, context information, and control information.

Program block: the program to be executed.

Kernel stack: each process is bound to a kernel stack, used to save the interrupt/exception context when the process is working in kernel mode.

Data block: stores the program's private data; the user stack is also allocated in the data block.

Process context

The physical entities of a running process together with the environment that supports its execution are called the process context.

A process runs within its current context. When the system dispatches a new process to take over the processor, the old and new processes undergo a context switch: the state of the old process is saved and the state of the new process is loaded so that the new process can run.

Composition of the process context

User-level context: consists of the program text, data, shared storage, and user stack; it occupies the process's virtual address space.

Register context: consists of the program status word register, the instruction counter, the stack pointer, the control registers, the general registers, and so on.

System-level context: consists of the process control block, main-memory management information (page table or segment table), the kernel stack, and so on.

Process control block (Process Control Block, PCB)

Each process has one and only one process control block.

The PCB is the data structure the operating system uses to record and describe the state of a process and its related information; it is the only data structure through which the operating system keeps track of a process.

The system controls and manages processes through their PCBs, so the PCB is the only sign by which the system knows that a process exists.

Processes and PCBs correspond one to one: the PCB is created when the process is created and accompanies the process until the process is destroyed. The PCB is like a person's household registration record.

Contents of the PCB

① Identification information

Process ID: unique, usually an integer

Process group ID

User process name

User group name

② Context information

Register contents (general register contents, control register contents, stack pointer, etc.)

③ Control information

Process scheduling information: such as process state, wait time, wait reason, process priority, queue pointer, etc.

Process composition information: such as the text segment pointer, data segment pointer, and process family information

Interprocess communication information: such as message queue pointers, semaphores in use, and locks

Segment and page table pointers, and the address of the process image in secondary storage

CPU occupancy and usage information: such as the remaining time slice, CPU time used, total elapsed time, timer information, and accounting information

Process privilege information: such as main memory access rights and processor privileges

Resource list: all resources required and the resources already allocated
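As an illustration, the three groups of PCB contents above might be declared in C roughly as follows. This is a minimal compilable sketch; every field name is hypothetical and it quotes no real kernel:

    #include <stdint.h>

    #define MAX_NAME 32

    struct pcb {
        /* (1) identification information */
        int      pid;              /* unique process ID, usually an integer */
        int      pgid;             /* process group ID                      */
        char     name[MAX_NAME];   /* user process name                     */

        /* (2) context information */
        uint64_t regs[16];         /* general register contents             */
        uint64_t pc;               /* instruction counter                   */
        uint64_t sp;               /* stack pointer                         */

        /* (3) control information */
        int      state;            /* ready / running / waiting             */
        int      priority;         /* scheduling priority                   */
        int      time_slice_left;  /* remaining time slice                  */
        struct pcb *queue_next;    /* queue pointer (see linked mode below) */
        void    *page_table;       /* main-memory management information    */
    };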

Process queues and their management

The PCBs of all processes in the same state are organized together in a data structure called a process queue, such as the running queue, the ready queue, and the waiting queues.

The PCBs of processes in the same state can be queued on a first-come, first-served basis, or by priority or other principles.

Common queue organization modes:

Linear mode

Linked mode

Index mode

(1) Linear mode

Based on the maximum number of processes, the OS statically allocates a block of space in main memory, and the PCBs of all processes are organized into one linear table.

Advantages: simple.

Disadvantages: limits the maximum number of processes in the system; the entire linear table must be scanned regularly, so scheduling efficiency is low.

(2) Linked mode

The PCBs of processes in the same state are linked into a queue by link pointers.

Processes in different states are placed in different queues, such as the running queue, the ready queue, and the waiting queues. The waiting processes can further be split into multiple waiting queues according to the wait reason, as sketched below.
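A minimal sketch of the linked mode in C, with hypothetical names: PCBs of processes in one state are chained through a link pointer and, here, served first-come, first-served:

    #include <stdio.h>
    #include <stdlib.h>

    struct pcb { int pid; struct pcb *next; };

    struct queue { struct pcb *head, *tail; };

    /* first-come, first-served: enqueue at the tail */
    void enqueue(struct queue *q, struct pcb *p) {
        p->next = NULL;
        if (q->tail) q->tail->next = p; else q->head = p;
        q->tail = p;
    }

    struct pcb *dequeue(struct queue *q) {
        struct pcb *p = q->head;
        if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
        return p;
    }

    int main(void) {
        struct queue ready = {0};
        for (int i = 1; i <= 3; i++) {          /* three PCBs enter the ready queue */
            struct pcb *p = malloc(sizeof *p);
            p->pid = i;
            enqueue(&ready, p);
        }
        struct pcb *p;
        while ((p = dequeue(&ready))) {         /* dispatched in arrival order */
            printf("dispatch pid %d\n", p->pid);
            free(p);
        }
        return 0;
    }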

(3) Index mode

For processes in the same state, a separate PCB index table is set up (such as a ready index table and a waiting index table) that records the addresses of the PCBs within the PCB table.

Process switching

When one process yields the processor and another process takes possession of it, this is called process switching.

Process switching gives every process in the system a chance to occupy the CPU.

Steps of process switching (a toy sketch follows the list):

Save the processor context of the interrupted process

Modify the relevant information in the interrupted process's PCB, such as the process state

Move the interrupted process's PCB into the appropriate queue

Select the next process to occupy the processor and run

Modify the relevant information in the selected process's PCB

Set the address translation and storage protection information used by the operating system according to the selected process

Restore the processor context of the selected process
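To make the ordering concrete, here is a toy simulation of these steps in C. The "registers" are plain variables, all names are hypothetical, and the queueing and memory-protection steps are only noted in comments:

    #include <stdio.h>
    #include <string.h>

    struct cpu { long regs[4]; long pc; };
    struct task { int pid; int state; struct cpu ctx; };

    enum { READY = 0, RUNNING = 1 };

    void context_switch(struct cpu *cpu, struct task *old, struct task *nxt) {
        memcpy(&old->ctx, cpu, sizeof *cpu);  /* 1. save the old context          */
        old->state = READY;                   /* 2. update the old PCB            */
        /* 3. requeue the old PCB -- elided in this toy                           */
        /* 4. 'nxt' was already chosen by the caller (the scheduler)              */
        nxt->state = RUNNING;                 /* 5. update the selected PCB       */
        /* 6. set address translation / protection info -- elided in this toy    */
        memcpy(cpu, &nxt->ctx, sizeof *cpu);  /* 7. restore the selected context  */
    }

    int main(void) {
        struct cpu cpu = { {1, 2, 3, 4}, 100 };
        struct task a = { 1, RUNNING, {{0}, 0} };
        struct task b = { 2, READY,   {{9, 9, 9, 9}, 200} };
        context_switch(&cpu, &a, &b);
        printf("now running pid 2 at pc=%ld; pid 1 saved pc=%ld\n", cpu.pc, a.ctx.pc);
        return 0;
    }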

Control and management of processes

A process has a life cycle: creation, running, pausing, and termination. The process life cycle is controlled by process-management routines.

Process control and management include:

Process creation

Process revocation

Process blocking

Process wake-up

Process suspension

Process activation

These control and management functions are implemented by primitives in the operating system.

A primitive is an indivisible procedure, executed in kernel mode, that accomplishes a specific system function.

A primitive is characterized by indivisible execution: its execution must not be interrupted, it is an indivisible basic unit, and primitives execute sequentially, not concurrently.

1. Process Creation

Process creation is similar to registering a newborn's household record at the police station.

Steps of process creation:

(1) Add an entry to the process table, request a free PCB from the PCB pool, and assign a unique process identifier to the new process;

(2) Assign an address space to the process image of the new process. The process management program determines which programs are loaded into the process address space;

(3) Allocating all the resources required for the new process except the main memory space;

(4) Initialize the PCB, such as the process identifier, the initial processor state, and the process priority;

(5) Set the new process's state to ready and move it into the ready queue;

(6) Notify certain operating system modules, such as the accounting program and the performance monitoring program.
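These steps are performed inside the kernel; from user space, POSIX exposes them through fork(). A minimal, runnable example (the printed text is illustrative only):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();      /* the kernel allocates a PCB, assigns a unique PID,
                                    sets up an address space, and readies the child */
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0) {          /* child process */
            printf("child: pid %d\n", (int)getpid());
            _exit(0);
        }
        waitpid(pid, NULL, 0);   /* parent blocks until the child terminates */
        printf("parent: child %d finished\n", (int)pid);
        return 0;
    }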

2. Process Revocation

After a process finishes its task or when a critical error occurs, the operating system calls the process revocation primitive to destroy the process. This is the equivalent of a person dying and the family cancelling the household registration at the police station.

Steps of process revocation:

(1) Find the PCB according to the identifier of the process to be revoked and remove it from the corresponding queue;

(2) Return the resources owned by the process to its parent process or to the operating system;

(3) If the process has child processes, first revoke all of its child processes to keep them from becoming uncontrolled;

(4) Recycle the PCB and return it to the PCB pool.

3. Process blocking and wake-up

When a running process cannot continue because it must wait for some event to occur (such as waiting for a printer), it calls the blocking primitive to block itself, and its state changes from running to waiting (blocked).

When the awaited event completes, an interrupt is generated that activates the operating system; under system control the blocked process is awakened and transitions from the blocked state to the ready state.

Process blocking steps:

(1) Stop the process's execution and save its context information into the PCB;

(2) Modify the relevant contents of the process's PCB, such as changing the process state from running to waiting, and move the PCB into the waiting queue of the corresponding event;

(3) Transfer to the process scheduler to schedule another process to run.

Process wake-up steps:

(1) Remove the process from the corresponding waiting queue;

(2) Modify the relevant information in the process's PCB, such as changing the process state to ready, and move it into the ready queue;

(3) If the awakened process has a higher priority than the currently running process, set the rescheduling flag.
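The blocking and wake-up primitives themselves run inside the kernel, but the same wait-for-event pattern is visible from user space in POSIX condition variables. A minimal sketch using only the standard pthreads API:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
    static int event_done = 0;

    static void *waiter(void *arg) {
        (void)arg;
        pthread_mutex_lock(&m);
        while (!event_done)             /* running -> waiting: block on the event */
            pthread_cond_wait(&c, &m);  /* sleeps without consuming the CPU       */
        pthread_mutex_unlock(&m);
        printf("waiter: woken, runnable again\n");
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, waiter, NULL);
        pthread_mutex_lock(&m);
        event_done = 1;                 /* the awaited event occurs                  */
        pthread_cond_signal(&c);        /* waiting -> ready: wake the blocked thread */
        pthread_mutex_unlock(&m);
        pthread_join(t, NULL);
        return 0;
    }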

Threads and their implementations

Motivation for introducing multithreading

Processes were introduced to let multiple programs execute concurrently, improving resource utilization and system efficiency.

Threads were introduced to reduce the time and space overhead of concurrent program execution, making the concurrency granularity finer and the concurrency better.

Two functions of the process

1. The process is the basic unit of resource allocation and protection.

2. The process is also the independent unit of scheduling and dispatching.

As the owner of resources, a process incurs a large time and space overhead during creation, revocation, and switching. The number of processes in a system therefore should not be too large, and the frequency of process switching should not be too high, but this in turn limits further increases in the degree of concurrency.

To solve this problem, the idea arose of separating the two functions of the process: make the basic unit of scheduling and dispatching not also be the unit that owns resources, and let the unit that owns resources avoid frequent switching. Thus the thread was born.

Definition of a process in a multithreaded environment

A process is the basic unit of resource allocation and protection in the operating system. Apart from the processor, it has a separate virtual address space to hold the process image (such as the programs and data associated with the process), and it implements protection for various resources: protected access to the processor, files, external devices, and other processes (interprocess communication).

1. The thread concept in a multithreaded environment

A thread is an entity within an operating system process that can execute concurrently; it is the basic unit of processor scheduling and dispatching.

Each process can contain multiple threads that can be concurrently executed.

A thread itself owns no system resources; it has only a few essential resources: a program counter, a set of registers, and a stack.

Threads share the main memory space and the resources owned by the process to which they belong.

2. Benefits of introducing threading

It takes less time to create a new thread (and likewise to terminate one)

Switching between two threads takes less time

Because threads in the same process share memory and files, they can communicate with each other without calling the kernel (see the sketch after this list)
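A minimal pthreads sketch of the last point: two threads of one process communicate through an ordinary shared variable; the increment itself is a plain memory access, with no kernel-mediated IPC involved:

    #include <pthread.h>
    #include <stdio.h>

    static int shared = 0;   /* lives in the process's data block, visible to all threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);   /* only guards the increment */
            shared++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);  /* cheaper than creating a process */
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("shared = %d\n", shared);         /* 200000 */
        return 0;
    }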

3. Thread-to-process comparison

Threads have many of the characteristics of processes, so they are also called lightweight processes, while traditional processes are called heavyweight processes.

In an OS that introduces threads, each process has one or more threads.

(1) Scheduling

In a traditional OS, the basic unit of resource ownership, scheduling, and dispatching is the process. In a system that introduces threads, the thread is the basic unit of scheduling and dispatching, while the process remains the basic unit of resource ownership.

Within the same process, switching between threads does not cause a process switch; switching from a thread in one process to a thread in another process does cause a process switch.

(2) Concurrency

In a system that introduces threads, not only can processes execute concurrently, but threads within the same process can also execute concurrently. The system therefore has better concurrency.

(3) Resource ownership

Whether in a traditional OS or in an OS with threads, the process is the independent unit of resource ownership. A thread generally owns no system resources, but it can access the resources of the process it belongs to: all the resources of a process are shared by all the threads within it.

(4) System overhead

The cost of creating and revoking a process is much greater than that of creating and revoking a thread. When a process switch occurs, the CPU environment of the current process must be saved and the CPU environment of the new process set up, whereas a thread switch only saves and restores a small number of registers and involves no storage-management operations. Clearly, the overhead of process switching is much greater than that of thread switching.

At the same time, synchronization and communication between threads of the same process are easier to implement, because the threads share the same address space.

Implementation of threads

Multithreading implementations fall into three categories:

User-level threads (User-Level Threads, ULT): the creation, revocation, and switching of these threads are implemented by the user program; the kernel is unaware that user-level threads exist.

Kernel-level threads (Kernel-Level Threads, KLT): these depend on the kernel; whether a thread belongs to a user process or to a system process, its creation, revocation, and switching are implemented by the kernel.

Hybrid threading: supports both ULT and KLT.

1. User-level threads (ULT)

All thread management is done by the application

Implemented entirely by a thread library in user space

The kernel does not know that threads exist

The thread library

Provides a runtime management system for threads:

Creating and revoking threads

Passing messages and data between threads

Scheduling thread execution

Saving and restoring thread contexts

Advantages and disadvantages of user-level threading

Advantages:

Thread switching does not call the kernel

Scheduling is application-specific: the application can choose the algorithm best suited to it

ULT can run on any operating system (only a thread library is required) and can be implemented on an OS that does not support threads

Disadvantages:

Because most system calls block, the blocking of one user-level thread blocks the entire process

The kernel assigns the processor only to processes, so two threads of the same process cannot run on two processors at the same time

2. Kernel-level threads (KLT)

All thread management is done by the kernel

There is no thread library; instead, the kernel provides a threading API for using threads

The kernel maintains the contexts of processes and threads

Switching between threads requires kernel support

Scheduling is done on a per-thread basis

Advantages and disadvantages of kernel-level threads

Advantages:

On a multiprocessor, the kernel can schedule multiple threads of the same process simultaneously

Blocking is done at the thread level

Disadvantages:

Thread switching within the same process calls the kernel, incurring a large system overhead

3. Hybrid Threading

Supports both user-level threads and kernel-level threads.

Example: Solaris

Processor scheduling

The processor is an important resource in a computer system.

The processor scheduling algorithm has an important influence on the overall performance of the whole computer system.

Processor scheduling can be divided into three levels:

High-level scheduling

Intermediate scheduling

Low-level scheduling

1. High-level scheduling

Also known as job scheduling or long-term scheduling.

The main function of job scheduling is to select, according to the job-scheduling algorithm, some of the jobs in the backlog queue on external storage, allocate them the necessary resources, create the corresponding processes for them, and do the finishing work after the jobs complete.

2. Low-level scheduling

Also known as process/thread scheduling or short-term scheduling.

The main function of process scheduling is to select, according to some scheduling algorithm, a process or kernel-level thread from the ready queue and give it the processor.

Low-level scheduling is the most important part of the operating system; it executes very frequently, and its scheduling policy directly affects the performance of the whole system.

Scheduling modes for low-level scheduling:

(1) Non-preemptive mode

Once the scheduler has allocated the CPU to a process/thread, it runs until it completes or until some event prevents it from running; only then is the CPU allocated to another process.

This scheduling mode is typically used in batch-processing systems.

Advantages: simple, low system overhead

Disadvantage: hard to meet the needs of urgent tasks; should not be used in real-time systems

(2) Preemptive mode

While a process/thread is executing on the processor, the scheduler may, according to certain principles, take the CPU away from it and allocate it to another process/thread.

This scheduling mode is usually used in time-sharing systems and real-time systems.

Principles of preemption:

Priority principle

Short job (process) first principle

Time slice principle

3. Intermediate scheduling

Also known as balanced scheduling or medium-term scheduling.

It involves swapping processes between main memory and external storage. When main memory is scarce, processes that are temporarily not running are swapped out of memory to external storage, where they are in a "suspended" state; when such a process is again ready to run and main memory has free space, it is swapped back in from external storage.

The primary purpose of intermediate scheduling is to improve memory utilization and system throughput.

Low-level scheduling is indispensable in every type of operating system. In pure time-sharing or real-time systems, high-level scheduling is usually not needed. General-purpose systems have high-level and low-level scheduling, and a fully featured system also introduces intermediate scheduling.

Principles for choosing a scheduling algorithm

1. Resource utilization (especially CPU utilization)

CPU utilization = CPU effective working time / CPU total run time,

CPU total run time = CPU effective working time + CPU idle waiting time.
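For example, if over a 100-hour run the CPU spends 80 hours doing effective work and 20 hours idle, then CPU utilization = 80 / (80 + 20) = 80%.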

2. Throughput

The number of jobs processed per unit of time.

An important indicator for evaluating the performance of batch-processing systems.

3. Fairness

Ensure that each user process receives a reasonable share of the CPU and other resources, and that no process starves.

4. Response Time

The time interval from when an interactive process submits a request (command) until it receives the response is called the response time.

The response time includes the time to deliver the request to the CPU, the time for the CPU to process the request, and the time to echo the response back to the terminal display.

Making the response time of interactive users as short as possible, or processing real-time tasks as promptly as possible, is an important index of scheduling performance in time-sharing and real-time systems.

5. Turnaround Time

The time interval from when a batch user submits a job to the system until the job completes is called the job turnaround time.

It includes four parts: the time the job waits in the backlog queue on external storage for job scheduling, the time the process waits in the ready queue for process scheduling, the time the process executes on the CPU, and the time the process waits for I/O operations to complete.

The job turnaround time, or the average job turnaround time, should be made as short as possible; this is an important index of scheduling performance in batch systems.

6. Turnaround time Ti

If job i is submitted to the system at time Ts and completes at time Tf, its turnaround time Ti is: Ti = Tf - Ts.

Turnaround time = job waiting time + job running time.

To improve system performance, the average job turnaround time and the average weighted turnaround time over the jobs of several users should be kept as small as possible.

Average job turnaround time T

T = (ΣTi) / n

Weighted turnaround time Wi

If the turnaround time of job i is Ti and its required running time is Tk, then

Wi = Ti / Tk

Average weighted turnaround time W

W = (ΣWi) / n
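A small worked example: if job i is submitted at Ts = 10:00 and completes at Tf = 10:30, its turnaround time is Ti = 30 minutes. If its actual running time was Tk = 10 minutes (so it waited for 20), its weighted turnaround time is Wi = 30/10 = 3.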

The relationship between jobs and processes

1. Basic concepts of jobs

(1) Job

An independent task that the user submits to the operating system for computation.

(2) Job step

A job can be divided into several processing steps, each called a job step.

A typical job-control sequence:

compile, link, load, run

(3) Job control block (Job Control Block, JCB)

To manage jobs effectively, a job control block must be established for each job that enters the system. The JCB is created by the spooling system when a batch job enters the system; it is the sign that the job exists in the system, and it is withdrawn when the job is withdrawn.

The JCB holds all the information the system needs in order to manage the job.

Organization and management of batch jobs

A job goes through four different states from entering the system to the end of its run:

Input state

Backlog state

Execution state

Completion state



A job is a task entity; a process is the executing entity that completes the task. Without the job there is no task and hence no need for the process, and without the process the job's task cannot be completed.

The concept of a job is used mostly in batch operating systems, while processes are used in all kinds of multiprogramming systems.

Processor scheduling algorithms

1. First-come, first-served algorithm (FCFS)

The simplest scheduling algorithm; it can be used for job scheduling and also for process scheduling.

Jobs (processes) are dispatched in the order in which they arrive.

Advantages: easy to implement

Disadvantage: the scheduler always chooses the job that has waited longest, regardless of how long the job will run, so the algorithm is inefficient;

it favors long jobs and penalizes short jobs
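A minimal FCFS simulation in C with made-up jobs makes the disadvantage visible: one long job at the head of the queue inflates the turnaround times of the short jobs behind it:

    #include <stdio.h>

    struct job { const char *name; int arrive, run; };

    int main(void) {
        struct job jobs[] = { {"A", 0, 24}, {"B", 1, 3}, {"C", 2, 3} };
        int n = 3, clock = 0;
        double total_tt = 0;
        for (int i = 0; i < n; i++) {    /* array order == arrival order */
            if (clock < jobs[i].arrive) clock = jobs[i].arrive;
            int finish = clock + jobs[i].run;
            int tt = finish - jobs[i].arrive;   /* turnaround = finish - submission */
            printf("%s: wait %2d, turnaround %2d\n",
                   jobs[i].name, clock - jobs[i].arrive, tt);
            total_tt += tt;
            clock = finish;
        }
        /* prints 25.00: short jobs B and C pay for long job A */
        printf("average turnaround = %.2f\n", total_tt / n);
        return 0;
    }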

2. Shortest job (process) first algorithm
SJF: Shortest Job First
SPF: Shortest Process First

Can be used for job scheduling and process scheduling.

The CPU run time of each job (process) is estimated, and the job (process) with the shortest estimated run time is selected to run.

Advantages:

(1) Easy to implement.

(2) In general, this algorithm performs better than the first-come, first-served (FCFS) scheduling algorithm.

Disadvantages:

(1) The execution time of a job (process) is estimated by the user and is not necessarily accurate, so what is actually implemented is not necessarily shortest-job-first scheduling.

(2) Unfavorable to long jobs

If the system keeps accepting new jobs, long jobs may go undispatched for a long time: the starvation phenomenon appears.

(3) Lacking a preemption mechanism, it is still not ideal for time-sharing and real-time systems
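A minimal non-preemptive SJF simulation in C with made-up jobs; at each decision point it picks the shortest job among those that have already arrived:

    #include <stdio.h>

    struct job { const char *name; int arrive, run, done; };

    int main(void) {
        struct job jobs[] = { {"A",0,7,0}, {"B",2,4,0}, {"C",4,1,0}, {"D",5,4,0} };
        int n = 4, clock = 0;
        for (int served = 0; served < n; served++) {
            int pick = -1;
            for (int i = 0; i < n; i++)   /* shortest among the arrived, unfinished jobs */
                if (!jobs[i].done && jobs[i].arrive <= clock &&
                    (pick < 0 || jobs[i].run < jobs[pick].run))
                    pick = i;
            if (pick < 0) { clock++; served--; continue; }  /* CPU idle: nothing arrived */
            clock += jobs[pick].run;
            jobs[pick].done = 1;
            printf("%s finishes at %2d (turnaround %2d)\n",
                   jobs[pick].name, clock, clock - jobs[pick].arrive);
        }
        return 0;
    }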

3. Highest response ratio first algorithm
(HRRF: Highest Response Ratio First)

FCFS and SJF are both one-sided scheduling algorithms: FCFS considers only the job's waiting time and ignores its computing time, while SJF considers only the user-estimated computing time and ignores the waiting time.

HRRF is a compromise between the two. It considers both the waiting time and the running time of each job, so it favors short jobs without letting long jobs wait too long, improving scheduling performance.

Response ratio R = job turnaround time / job processing time

= (job processing time + job waiting time) / job processing time

= 1 + job waiting time / job processing time

Highest response ratio first algorithm: at each scheduling decision, the response ratios of all candidate jobs are computed, and the job with the highest response ratio is selected for dispatch.

Short jobs easily obtain high response ratios;

a long job that has waited long enough will also obtain a sufficiently high response ratio, so the starvation phenomenon does not occur.

The performance of this algorithm lies between FCFS and SJF; computing the response ratios incurs a certain time overhead.
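A small worked example with made-up numbers: at a scheduling point, a short job that has waited 6 units and needs 2 units of processing has R = 1 + 6/2 = 4, while a long job that has waited 3 units and needs 30 has R = 1 + 3/30 = 1.1, so the short job is chosen. But once the long job has waited 90 units, its ratio reaches 1 + 90/30 = 4, and it can no longer be passed over indefinitely.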

4. Priority scheduling algorithm

Each job or process is assigned a priority, and the job or process with the highest priority is selected for dispatch.

1. Two variants

Non-preemptive: once a process has been scheduled to run, it runs until it can no longer run for reasons of its own.

Preemptive: when a process with a higher priority than the running process becomes ready, the system may forcibly take the CPU from the running process and give it to the higher-priority process.

The key to this scheduling algorithm is how the priority of a process is determined, and whether the priority, once determined, is fixed or changes as the process runs.

2. Types of priority

(1) Static priority

The priority is determined at process creation time and remains the same while the process is running.

Methods of determining the priority:

Determined by the system (internal priority): considers the process's run time, resource usage, and process type.

Determined by the user (external priority): considers the urgency of the process; billing may also be tied to the process priority.

(2) Dynamic priority

A priority is assigned when the process is created, and the system then continually adjusts it while the process runs, according to the system's design goals. The advantage of this method is that it objectively reflects the actual situation of the process and helps ensure that the system's design goals are achieved.

For example, the priority of a process that has waited a long time can be raised.
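A small worked example with made-up numbers: process A (priority 3) is running when process B (priority 8) becomes ready. Under the preemptive variant, B immediately takes the CPU from A; under the non-preemptive variant, B must wait until A completes or blocks of its own accord.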

5. Time-slice round-robin scheduling algorithm (Round Robin, RR)

The time-slice round-robin method is commonly used in time-sharing systems.

CPU time is divided into time slices that are assigned in turn to the processes in the ready queue. A process occupies the CPU for one time slice; when the slice runs out, even if the process has not finished, the system takes the CPU away from it, and the process is queued at the tail of the ready queue to await the next round of scheduling.

1. Determination of the time slice size

The size of the time slice has a great effect on system performance. If the slice is too small, switching becomes frequent and the system overhead increases; if it is too large, the rotation period grows, every process completes within one slice, the round-robin algorithm degenerates into the FCFS algorithm, and interactive users' requirements cannot be met.

The length of the time slice should be determined by considering the number of processes, the switching overhead, the system efficiency, and the response time.
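A minimal round-robin simulation in C, assuming made-up remaining run times and a quantum of 4; a process whose slice expires goes back to the tail of the ready queue:

    #include <stdio.h>

    #define QUANTUM 4

    int main(void) {
        int remaining[] = { 10, 5, 8 };   /* remaining run time of P0, P1, P2 */
        int queue[64], head = 0, tail = 0, n = 3, clock = 0;
        for (int i = 0; i < n; i++) queue[tail++] = i;

        while (head < tail) {
            int p = queue[head++];        /* take the process at the front    */
            int slice = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
            clock += slice;
            remaining[p] -= slice;
            if (remaining[p] > 0)
                queue[tail++] = p;        /* slice used up: back to the tail  */
            else
                printf("P%d finishes at time %d\n", p, clock);
        }
        return 0;
    }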

2. Improvements to RR

Because RR uses a fixed time slice and only one ready queue, its quality of service is not ideal. It has been improved in two directions:

(1) Change the fixed time slice into a variable time slice,

introducing the variable-time-slice round-robin scheduling algorithm

(2) Change the single ready queue into multiple ready queues,

introducing the multilevel feedback queue scheduling algorithm

6. Multilevel feedback queue scheduling algorithm

(MLFQ: Multi-Level Feedback Queue)

Multiple ready queues are set up and given different priorities: the first queue has the highest priority, the second the next highest, and so on. The time-slice sizes also differ: the higher a queue's priority, the smaller the time slice allocated to each of its processes.

The processor scheduler selects processes from the first ready queue; processes within the same queue are ordered by the FCFS algorithm. Only when a higher-level queue is empty does the scheduler select from a lower-level ready queue.

When a new process enters memory, it is first placed at the tail of the first queue. When it executes, if it completes within its time slice it can leave the system; if it does not complete, it is moved to the tail of the second queue, and so on.

The algorithm has good performance and can satisfy the needs of various kinds of applications:

Short interactive time-sharing jobs: usually completed within the period prescribed by the first queue, which keeps terminal users satisfied.

Short batch jobs: usually finish after one time slice in the first queue and one in the second, so the turnaround time is still very short.

Long batch jobs: will run in turn in queues 1, 2, 3, ..., N.
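A toy MLFQ simulation in C, assuming three queues with doubling time slices and made-up job lengths; a job that exhausts its slice is demoted one level, so short work finishes in the top queues while long work drifts down:

    #include <stdio.h>

    #define LEVELS 3

    int main(void) {
        int quantum[LEVELS] = { 1, 2, 4 };   /* higher queue => smaller slice */
        int queue[LEVELS][64], head[LEVELS] = {0}, tail[LEVELS] = {0};
        int remaining[] = { 1, 3, 20 };      /* short, medium, long "jobs"    */
        int clock = 0;

        for (int p = 0; p < 3; p++) queue[0][tail[0]++] = p;  /* new: top queue */

        for (;;) {
            int lvl = -1;
            for (int l = 0; l < LEVELS; l++)   /* highest non-empty queue first */
                if (head[l] < tail[l]) { lvl = l; break; }
            if (lvl < 0) break;                /* all queues empty: done        */

            int p = queue[lvl][head[lvl]++];
            int slice = remaining[p] < quantum[lvl] ? remaining[p] : quantum[lvl];
            clock += slice;
            remaining[p] -= slice;
            if (remaining[p] == 0)
                printf("P%d finishes at %2d (level %d)\n", p, clock, lvl);
            else {                             /* unfinished: demote one level  */
                int down = lvl + 1 < LEVELS ? lvl + 1 : lvl;
                queue[down][tail[down]++] = p;
            }
        }
        return 0;
    }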
