Processes, threads, and CPU scheduling


I. The Concept of a Process

A process is a program in execution and is the basis of all computation in the system. More precisely, it is a run-time activity of a program with independent functionality, operating on a data set. A process can request and own system resources; it is a dynamic concept and an active entity. It is more than the program's code: it also includes the current activity, represented by the value of the program counter and the contents of the processor's registers.


II. Process States

A process can be in one of five states:

New: the process is being created

Running: instructions are being executed

Waiting: the process is waiting for an event to occur (such as I/O completion or receipt of a signal)

Ready: the process is waiting to be assigned to a processor

Terminated: the process has finished execution

The transitions among these five states are: new to ready (the process is admitted), ready to running (the scheduler dispatches it), running to waiting (it requests I/O or waits for an event), waiting to ready (the I/O or event completes), running to ready (it is preempted), and running to terminated (it exits).

III. Threads

A thread consists of a thread ID, a program counter, a register set, and a stack. It shares the code segment, data segment, and other operating-system resources with the other threads belonging to the same process.


A thread is the basic unit of CPU scheduling, while a process is the basic unit of system resource allocation; a process may contain multiple threads.


Multithreaded programming has four main advantages: responsiveness, resource sharing, economy, and the ability to exploit multiprocessor architectures. Although a user process may create many threads, these user threads are ultimately mapped to kernel threads for execution. The mapping models between user threads and kernel threads are:

(1) Many-to-one

(2) One-to-one

(3) Many-to-many


A thread library provides the programmer with an API for creating and managing threads. There are two main ways to implement a thread library: as a library entirely in user space (no system calls), or as a kernel-level library supported directly by the operating system (invoked through system calls). Three thread libraries are in common use today, and a minimal Pthreads example follows the list:

(1) POSIX Pthreads: the thread API used on Linux, available as both a user-level and a kernel-level library

(2) Win32: the kernel-level thread library on Windows

(3) Java: a user-level library; the Java virtual machine typically relies on the thread library of the host system, for example the Win32 API on Windows.
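
As a concrete illustration of the Pthreads API from (1), here is a minimal sketch in C. The main thread creates one worker; both print the same global counter because, as described above, threads of the same process share its data segment. The variable and function names are illustrative, not part of any standard.

/* Minimal Pthreads sketch: one worker thread sharing the
 * process's data segment with the main thread. */
#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;        /* visible to every thread in the process */

static void *worker(void *arg)
{
    (void)arg;                        /* unused */
    shared_counter++;                 /* safe here: only one worker exists */
    printf("worker sees counter = %d\n", shared_counter);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker, NULL) != 0)
        return 1;
    pthread_join(tid, NULL);          /* wait for the worker to terminate */
    printf("main sees counter = %d\n", shared_counter);
    return 0;
}

Compile with the pthread library linked in, e.g. cc file.c -lpthread on Linux.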


IV. CPU Scheduling

When we talk about CPU scheduling we usually talk about process scheduling. In fact, on modern operating systems where a process can contain multiple threads, the objects of CPU scheduling are kernel threads, but the scheduling algorithms are the same, so the discussion below is phrased in terms of processes. The execution of each process consists of alternating CPU bursts and I/O bursts: an I/O-bound program typically has many very short CPU bursts, while a CPU-bound program may have a few long ones. Understanding this distribution helps in selecting an appropriate scheduling algorithm.


CPU scheduling decisions may take place under four circumstances, when a process switches from:

(1) running to waiting

(2) running to ready

(3) waiting to ready

(4) running to terminated

In cases (1) and (4) the running process either lacks a resource it needs or has finished running, so a process must be taken from the ready queue and moved to the running state; there is scheduling but no real choice. In cases (2) and (3) a process has just entered the ready state, so there is a genuine choice to make. When scheduling can occur only in cases (1) and (4), the scheme is called non-preemptive (a newly ready process cannot affect the running process); otherwise it is called preemptive (a newly ready process may take the CPU away from the currently running process).


To compare different scheduling algorithms, a number of scheduling criteria have been proposed:

CPU utilization: keep the CPU as busy as possible

Throughput: the number of processes completed per unit of time

Turnaround time: the interval from submission of a process to its completion

Waiting time: the sum of the periods the process spends waiting in the ready queue

Response time: the time from the submission of a request until the first response is produced

A good scheduling algorithm maximizes CPU utilization and throughput while minimizing turnaround time, waiting time, and response time. The specific scheduling algorithms are described below.


1. First-come, first-served (FCFS)

The simplest CPU scheduling algorithm: processes are executed in the order in which they enter the ready queue. However, the average waiting time under this policy is often quite long, as the sketch below illustrates.
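
Here is a small worked sketch in C with made-up burst lengths (24, 3, and 3 ms): under FCFS each process waits for the full bursts of everything ahead of it.

/* FCFS sketch: processes run in arrival order, so each one
 * waits for the sum of the bursts that arrived before it.
 * The burst lengths are hypothetical. */
#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};         /* CPU bursts in ms, in arrival order */
    int n = 3, wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("process %d waits %d ms\n", i, wait);
        total_wait += wait;
        wait += burst[i];             /* everyone behind also waits for this burst */
    }
    printf("average wait = %.2f ms\n", (double)total_wait / n);
    return 0;
}

With this arrival order the average wait is (0 + 24 + 27) / 3 = 17 ms; if the two short jobs arrived first it would drop to (0 + 3 + 6) / 3 = 3 ms, which is exactly the effect the next algorithm exploits.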


2. Shortest-job-first scheduling (SJF)

Each process is associated with the length of its next CPU burst; when the CPU becomes idle, it is assigned to the process whose next CPU burst is shortest, with FCFS used to break ties between equal bursts. Note that the relevant quantity is the process's next CPU burst, not its total CPU time. This algorithm is provably optimal for average waiting time; the difficulty is knowing the length of the next CPU burst in advance, which in practice can only be predicted approximately.
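
One standard prediction technique is exponential averaging, tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the measured length of the most recent burst and tau(n) is the previous prediction. The sketch below uses alpha = 0.5 and an arbitrary initial guess and burst sequence; all of these values are assumptions for illustration.

/* Predicting the next CPU burst by exponential averaging:
 * tau_next = alpha * observed + (1 - alpha) * tau_prev. */
#include <stdio.h>

int main(void)
{
    double alpha = 0.5;                        /* weight of the most recent burst */
    double tau = 10.0;                         /* initial guess for the first burst */
    double observed[] = {6, 4, 6, 4, 13, 13};  /* measured bursts (ms) */
    int n = sizeof(observed) / sizeof(observed[0]);

    for (int i = 0; i < n; i++) {
        printf("predicted %5.2f ms, observed %2.0f ms\n", tau, observed[i]);
        tau = alpha * observed[i] + (1.0 - alpha) * tau;  /* update the estimate */
    }
    return 0;
}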


3. Priority scheduling

Each process is associated with a priority; the CPU is assigned to the process with the highest priority, and processes of equal priority are dispatched in FCFS order. Shortest-job-first is in fact a special case of the priority scheduling algorithm.

A major problem with priority scheduling is that a low-priority process can be blocked indefinitely, i.e. starve (it is able to run but never gets the CPU). One solution to this problem is aging: gradually increasing the priority of processes that have been waiting in the system for a long time.
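
A toy sketch of aging in C: assume process 0 holds the CPU while processes 1 and 2 wait, and that a smaller number means a higher priority (a convention of this sketch only). Each tick the waiting processes move up, so even the lowest-priority process eventually reaches the top instead of starving.

/* Aging sketch: every tick, raise the priority of waiting
 * processes (smaller number = higher priority here). */
#include <stdio.h>

#define NPROC 3

int main(void)
{
    int prio[NPROC] = {1, 5, 9};          /* hypothetical priorities; 0 is running */

    for (int tick = 1; tick <= 4; tick++) {
        for (int i = 1; i < NPROC; i++)   /* processes 1 and 2 are waiting */
            if (prio[i] > 0)
                prio[i]--;                /* age: waiting raises priority */
        printf("tick %d: priorities = %d %d %d\n",
               tick, prio[0], prio[1], prio[2]);
    }
    return 0;
}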


4. Round-robin scheduling (RR)

Designed for time-sharing systems, RR is similar to FCFS except that each process is allocated the CPU for at most one time slice (quantum) at a time, and the processes in the ready queue take turns executing.
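
A minimal round-robin simulation in C, with hypothetical bursts and a 4 ms quantum; a process that finishes within its slice simply gives up the rest of it.

/* Round-robin sketch: each pass hands every unfinished process
 * at most one quantum; leftovers wait for the next pass. */
#include <stdio.h>

int main(void)
{
    int remaining[] = {24, 3, 3};     /* hypothetical bursts (ms) */
    int n = 3, quantum = 4, t = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0)
                continue;             /* already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            remaining[i] -= slice;
            t += slice;
            printf("t=%2d ms: process %d ran %d ms%s\n",
                   t, i, slice, remaining[i] == 0 ? " (done)" : "");
            if (remaining[i] == 0)
                done++;
        }
    }
    return 0;
}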


5. Multilevel queue scheduling

The ready queue is partitioned into several independent queues, and each process is permanently assigned to one of them according to its properties; each queue has its own scheduling algorithm. There must also be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling.


6. Multilevel feedback queue scheduling

Similar to multilevel queue scheduling, but with one key difference: processes are allowed to move between queues. The main idea is to separate processes according to the characteristics of their CPU bursts; if a process uses too much CPU time, it is moved to a lower-priority queue.

Multilevel feedback queue scheduling is the most general CPU scheduling algorithm, but because of its many tunable parameters it is also the most complex; a simplified sketch of the demotion idea follows.
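
The demotion idea can be sketched with just two queues in C. Real multilevel feedback queues use more queues, per-queue quanta, and promotion rules, and every number below is an assumption for illustration: a process that uses up its whole quantum in the top queue is moved to the lower queue, which runs only when the top queue is empty.

/* Two-level MLFQ sketch: queue 0 has a short quantum; a process
 * that exhausts it is demoted to queue 1, which runs only when
 * queue 0 is empty. Scheduling inside a queue is simplified. */
#include <stdio.h>

#define NPROC 3

int main(void)
{
    int remaining[NPROC] = {20, 5, 2};   /* hypothetical bursts (ms) */
    int level[NPROC]     = {0, 0, 0};    /* everyone starts in queue 0 */
    int quantum[2]       = {4, 8};       /* queue 0: 4 ms, queue 1: 8 ms */
    int done = 0, t = 0;

    while (done < NPROC) {
        int pick = -1;                   /* first unfinished process in the
                                            highest non-empty queue */
        for (int q = 0; q < 2 && pick < 0; q++)
            for (int i = 0; i < NPROC; i++)
                if (remaining[i] > 0 && level[i] == q) { pick = i; break; }

        int q = level[pick];
        int slice = remaining[pick] < quantum[q] ? remaining[pick] : quantum[q];
        remaining[pick] -= slice;
        t += slice;
        printf("t=%2d ms: process %d ran %d ms in queue %d\n", t, pick, slice, q);

        if (remaining[pick] == 0)
            done++;                      /* finished */
        else if (q == 0)
            level[pick] = 1;             /* used the full quantum: demote */
    }
    return 0;
}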


V. Multiprocessor Scheduling

With multiple CPUs, load sharing becomes possible, and the corresponding scheduling problem becomes more complex. As with single-processor CPU scheduling, there is no single best algorithm. There are generally two approaches to CPU scheduling on multiprocessors, as follows:

Asymmetric multiprocessing: one processor handles all scheduling decisions, I/O processing, and other system activities, while the remaining processors execute only user code. This approach is simpler because only one processor accesses the system data structures, reducing the need for data sharing.

Symmetric multiprocessing (SMP): each processor schedules itself; the scheduler on each processor examines the common ready queue and selects a process to execute. Modern operating systems generally support this approach.


Because of CPU scheduling, a process does not usually run to completion on a processor in one stretch; it may give up the CPU in the middle. Under symmetric multiprocessing it may then migrate to another processor, while the data associated with the process is still in the original processor's cache; the migration invalidates that cache and forces it to be repopulated, which is expensive. Most SMP systems therefore try to keep a process running on the same processor, a policy known as processor affinity.
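
On Linux, one concrete way a program can request processor affinity is the sched_setaffinity system call; the sketch below pins the calling process to CPU 0. The call is Linux-specific (other systems expose affinity through different interfaces), and hard-pinning like this is shown only to make the concept tangible.

/* Pin the calling process to CPU 0 (Linux-specific). */
#define _GNU_SOURCE                   /* needed for the CPU_* macros */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);                   /* start with an empty CPU mask */
    CPU_SET(0, &set);                 /* allow CPU 0 only */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("now pinned to CPU 0\n");
    return 0;
}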


By providing multiple physical processors, an SMP system allows several threads to run concurrently. An alternative is to provide multiple logical processors rather than physical ones, a technique known as symmetric multithreading (SMT), also called hyper-threading. SMT creates several logical processors on one physical processor, presenting the operating system with a view of multiple logical processors; each logical processor has its own architectural state, including general-purpose and machine-state registers. This technique must be supported by the hardware rather than by software: the hardware has to provide a representation of the architectural state of each logical processor, as well as interrupt handling for it.






