Implementing Five Process Scheduling Algorithms (Part 1)

Source: Internet
Author: User

Experimental requirements

1. Simulate process scheduling in an event-driven fashion, implementing:

    • Shortest job first (SJF);
    • Shortest remaining time first (SRTF);
    • Highest response ratio first (HRRF);
    • Priority scheduling;
    • Round-robin scheduling (RR).

Of these, SJF and HRRF are non-preemptive; the rest are preemptive.

2. Implement these five scheduling algorithms in C. (For convenience, a C++ header is included so that cout can be used for output.)

I. Basic knowledge: processes

1.1 Meaning of a process

Broadly speaking, a process is one execution of a program with independent functionality, operating on a particular data set.

A process is an abstract concept: an abstraction of a running program. A program itself is just a series of code or instructions; it is static, like a book sitting on a shelf.

When does the program "move"? When we execute it, of course. For example, double-click the Word shortcut with the mouse, and after a moment Word appears in front of you. Strictly speaking, it is not "you" who execute the program; you ask the computer to execute it. And how does the computer execute a program? That involves its interior, its "brain": the central processing unit (CPU).

What does the CPU do? The CPU executes specific instructions. If, as above, the program is like a book, then the CPU plays the role of the reader and the text in the book corresponds to the instructions: the CPU "reads" the "book", takes something from it, acts accordingly, and thereby completes a task.

If we want a book that really resembles a list of instructions, a recipe is the better example.

Take a recipe: suppose you are going to cook a dish by following the 10 steps it lists. Before cooking, you first prepare the ingredients, the seasonings, and so on. Are these preparations not instructions too? In fact, a program file (an executable) on a computer contains data in addition to code, for example the application's icon. Next, "you", playing the CPU, follow the steps to cook the dish, waiting where the recipe requires waiting. After the whole series of steps, a finished dish sits in front of you, with all its color and flavor. This confirms the "independent functionality" part of the definition: cooking one particular dish is completing one task independently.

1.2 States of a process

The simplest formulation (the three-state model) divides a process's state into waiting, ready, and running. The five-state model adds the new state and the terminated state to the three-state model.

Take cooking a multi-course meal as an example: each process corresponds to the cooking of one dish.

    • Waiting state--a dish needs to boil, steam, or simmer for a while, and you can only wait for that to finish;
    • Ready state--a dish has finished boiling, steaming, or simmering and needs your attention, but you are busy with another dish and have not had time to switch over;
    • Running state--you are actively cooking the current dish;
    • New state--you are about to start making a dish;
    • Terminated state--the dish is done, with only some finishing touches left.
1.3 Characteristics of the process

A process has four characteristics: dynamism, concurrency, independence, and asynchrony.

Continuing with the multi-course cooking example:

    • Dynamism--during the day the kitchen is busy and the process is "alive", because "you" are working. In the evening the kitchen is empty, the facilities are back in place as if no one had ever cooked there, and the process is "dead". The process switches state between day and night, so it is dynamic;
    • Concurrency--you can cook several dishes at the same time;
    • Independence--the seasoning you prepare for one dish is prepared for that dish's needs alone, not mixed with another's. The steps of each recipe must stay in order and must not get jumbled. One recipe completes one dish; different recipes complete different dishes. Cooking one dish does not interfere with cooking the others;
    • Asynchrony--each dish is cooked intermittently (because several are being cooked at once), and the length of each interruption is unknown. The total time for the whole meal varies from run to run, which reflects the unpredictable nature of cooking several dishes at once.

That is my metaphor; the textbook description goes like this:

    • Dynamism--the essence of a process is one execution of a program in a multiprogramming system. Since it is a course of events with a beginning and an end, a process is created and eventually dies; once it has finished executing, it generally leaves no trace of its run;
    • Concurrency--any process can execute concurrently with other processes, even though at any single instant only one program is actually running on a core (this distinguishes concurrency from parallelism);
    • Independence--a process is a basic unit that can run on its own, and it is also the system's independent unit of resource allocation and scheduling;
    • Asynchrony--because of constraints among processes, execution is discontinuous: each process advances at its own independent, unpredictable speed.
1.4 Scheduling of processes

Process scheduling is divided into preemptive and non-preemptive (or deprivation and non-deprivation).

Preemptive scheduling, as the name "preemption" suggests, embodies two ideas: "grabbing" and "occupying".

What is grabbing? Under the algorithm, a task with more power or higher priority naturally "grabs" the chance to run. If the system is running nothing else at the time, task A simply runs. If the system is running some task B, then, provided the algorithm gives A the qualification to "grab", it is like a game of musical chairs: A grabs the seat, B misses it, B is paused, and A starts to run.

What is occupying? When a task grabs a chance to run (that is, CPU time), it occupies a corresponding slice of run time. During that time the system runs only that task, until the next moment the scheduler uses the algorithm to reassign the CPU.

So, as in musical chairs, grabbing seats is itself a way of allocating a resource: whoever is bigger, stronger, or quicker to react grabs more chances. When many players go for the same seat, that is "competition"; the "better" the seat, the more players go for it and the fiercer the competition becomes. The loser is unreconciled and wants to win the next round as soon as possible; the winner keeps its advantage and may look down on the loser, but cannot afford to be arrogant.

Next, the "queue". Suppose you must complete a series of items; take the recipe with its 10 steps as an example. Write the 10 steps on 10 slips of paper and stack them up. The slip on top of the pile is your current step, Step 1. When Step 1 is finished, discard its slip; Step 2 is now on top, and so on until only Step 10 remains. When the last slip is used up, the queue is "empty".

Whether scheduling is preemptive or non-preemptive, tasks run in the order given by this queue. The queue therefore represents each task's "life cycle": as long as a task is still in the queue it is not yet "dead" (it has not finished running), and before it joins the queue it has not been "born" (it has not started running).

Now the key question is how to arrange this queue, which is exactly what the five scheduling algorithms above do. "Scheduling" can be seen as a way of arranging the queue.

Both kinds are based on the queue. Preemptive scheduling means that even once task A is running, it may be preempted by another task B; at that moment A is moved down the pile and B is moved to the top. Non-preemptive scheduling means that once task A runs, it runs until it finishes; if task B also wants to start, it can only be placed below A. In other words, the order in which tasks run is fixed at the moment they join the queue, and the order of the queue never changes afterward.

II. Threads

2.1 Meaning of a thread

A thread is the smallest unit of a program's flow of execution, and it is the basic unit that the system schedules and dispatches independently.

Just as a process is an abstraction, a thread is an abstraction too, at an even higher level. Originally a process could carry out only one task at a time, but demands grew: we want a single process to carry out several tasks at once. Without the concept of a thread, the tasks within one process could only run serially (one after another).

Now regard the process itself as a small system; then the "processes" of that system are its threads. The process used to be the basic unit of task scheduling, but compared with a thread a process is too "big": a thread is the finer-grained unit.

2.2 States of a thread

The states of a thread are the same as those of a process: they follow the same five-state model, so you can refer to the process's five-state model above.

2.3 Characteristics of a thread
    • A thread is the operating system's basic unit of scheduling;
    • A thread switches state faster and more cheaply than a process;
    • A thread owns no resources of its own; it is only an abstraction of a task, and the threads within one process share that process's resources;
    • On a single-core CPU, only one thread can run at any instant;
    • A process has at least one thread: its main thread.
2.4 Differences between a thread and a process
    • Processes are independent of one another (their resources are separate), while the threads of one process share its resources (if they could not share, a thread would hardly differ from a process);
    • Because processes are independent, when they need to communicate the system must provide explicit external mechanisms, which is cumbersome; threads can communicate simply by sharing data;
    • A thread switches state faster and more cheaply than a process;
    • In a multithreaded system, the thread is the executable entity, because a thread is the abstraction of one concurrent task within a process. The process used to be the body that ran tasks; once threads exist, the burden of running tasks falls on them.
2.5 Advantages of threading
    • Threads make full use of CPU resources (as mentioned under Hyper-Threading below);
    • They implement concurrency within a process, giving tasks a finer granularity, which helps developers decompose and abstract them (the principles of decomposition and abstraction);
    • They make asynchronous event handling within a process practical, especially for GUI events, server applications, and the like;
    • They improve a program's running efficiency.
III. Synchronous and asynchronous

3.1 Meaning of synchronous

Synchronous, simply put, means that when you call a procedure, if the procedure is still executing and has not yet returned a result, you cannot go on to the next thing until it returns.

For example, you put on socks before shoes; the order cannot be reversed. I summarize the characteristics of synchronization as: order, certainty, simplicity.

Order: the sequence of operations never reverses; everything happens in turn.

Certainty: the sequence of operations is deterministic and predictable, just as in a mathematical calculation the final answer is definite.

Simplicity: synchronization in this sense involves no threading; when a simple single-threaded process runs, the system's thread-synchronization APIs are not even needed.

3.2 Meaning of asynchronous

Asynchronous is defined relative to synchronous and means the opposite: when a procedure is called, the caller goes on to do other things, and the procedure's return value is not obtained immediately.

For example, heat food in a microwave: press the Start button, and the microwave begins to run. You do not have to stand there waiting for it; you can do other things. When you hear the "ding", you know the heating is over and you can fetch the food.

Two details are hidden here: you go off to do something else, and when you hear the "ding" you come running back.

You do something else, so now two things are going on at once; abstracting each into a thread, that is two threads running at the same time.

When you hear the "ding" and come back, the procedure has "returned"; but you never had to stand around waiting for that return, and that is what makes this kind of "return" ingenious.

So, to achieve asynchrony, you must learn to do two things: add a new task (create a thread), and find out when the old task has finished so the follow-up can run (via state, notification, or callback).

3.3 Ways to implement Async

A thread can be created by calling an API; but how do you find out that the old task has finished?

State: a shared variable (a flag). When the old task ends it sets the variable to a valid value; the new task loops, checking whether the variable has become valid, and when it has, the old task is over.

Notification: like downloading software, where the system notifies you once the download completes; the old task sends the new task a notification or message when it ends.

Callback: hand the old task a piece of finishing work in advance, so that when the old task completes it runs that finishing work itself.

Of the three methods above: with "callback", the old and new tasks need no contact at all; with "notification", they are directly linked; with "state", they are indirectly linked through the state variable.

With a callback, the new task can proceed without paying any attention to the old one.

With notification, the new task must wait to be interrupted by the notification and then handle it; while waiting, the new task is usually blocked.

With state, the new task does not wait; it only needs to check in time whether the state variable has become valid (that is, holds a valid value). This approach is polling.

IV. Concurrency and parallelism

4.1 Meaning of concurrency

When several processes are ready to run on a single-core system, it is simply impossible to truly run more than one of them at the same instant. The system can only divide CPU time into slices (running the scheduling algorithm at the start of each slice to assign a task) and hand each slice to one process. While one process runs during its slice, the other processes are suspended (in the ready state). This is what we call concurrency (concurrent execution).

The process scheduling described earlier involves exactly this kind of concurrency. The essence of concurrency is allocating time slices: execution is intermittent at the micro level but continuous at the macro level. If the system has 26 processes, then in one slice only process A runs, in the next slice process B, and so on until process Z has run, at which point every process has had a turn. The mark of how much "attention" a process receives is its CPU time: the more time, the longer it runs and the faster it progresses. (The strict A-to-Z order is only an illustration; it rarely happens in practice.)

In the definition above, the concept of a single core is mentioned.

Single-core is distinguished from multi-core: at any instant a single-core CPU can do only one task, while a multi-core CPU may do several at once. The more tasks proceed at once, the faster they advance and the more efficiently the system completes them.

There is also the concept of Hyper-Threading.

Hyper-Threading uses special hardware support to present the execution resources of one physical core as two logical cores, achieving thread-level parallelism on a single processor; with matching hardware and software support, it lets a single processor approximate the performance of two. In essence, Hyper-Threading is a technique for fully "mobilizing" processing resources inside the CPU that would otherwise sit temporarily idle.

Although Hyper-Threading can execute two threads at the same time (and if each process runs only one of its threads, that is equivalent to executing two processes at once), it is not the same as two real CPUs, each with independent resources. When the two threads need the same resource at the same moment, one of them must stop temporarily and yield until the resource is free again. Hyper-threaded performance is therefore not equal to the performance of two CPUs.

The difference between Hyper-Threading and multi-core lies mainly in resource independence. When the two running threads belong to the same process, then under Hyper-Threading their resource demands can conflict; on a multi-core CPU this does not happen, because the two threads run on two different cores even though they belong to the same process.

4.2 Meaning of parallelism

When the system has more than one CPU core, threads need no longer merely run concurrently. While one core executes one thread, another core can execute another; the two threads do not compete for CPU resources and can truly proceed at the same instant. This is what we call parallelism (parallel execution).

Parallelism is a multi-core concept. Concurrency is discontinuous at the micro level and continuous at the macro level; parallelism goes a step further and is "continuous" even at the micro level.

4.3 The difference between concurrency and parallelism

The two concepts of concurrency and parallelism are easily confused.

Parallelism means two or more events occur at the same instant; concurrency means two or more events occur within the same interval of time.

Concurrency is the product of the single-core CPU: discontinuous at the micro level, continuous at the macro level. Parallelism is the product of the multi-core CPU: more nearly continuous at the micro level, and continuous at the macro level.

Take counting coins as an example: there is a pile of coins to count. Concurrency is one person, A, counting them; parallelism is several people, B1 through Bn, counting them.

For A there are two options. Option one: count the whole pile straight through, head down. Option two: divide the coins into roughly equal n parts and count the parts one at a time, at separate moments.

For B: divide the coins into roughly equal n parts; since "B" is really n people, the n people count the n parts at the same time. Many hands make light work, so B finishes before A.

A's option one is a uniprogramming scheme; A's option two, a multiprogramming scheme, is concurrency; B's scheme is parallelism.

    • Parallel programs are more efficient than concurrent ones and use resources better, but they are more complex to write, their results can be unpredictable, and they are harder to debug;
    • Parallelism brings contention for resources, which pure concurrency does not (at any instant one program has the resources exclusively); hence mutual exclusion and synchronization problems arise, along with the deadlocks they can cause;
    • Concurrency is the product of the single-core CPU, micro-discontinuous and macro-continuous; parallelism is the product of the multi-core CPU, more nearly continuous at the micro level and continuous at the macro level.
4.4 Deadlock

4.4.1 Definition of deadlock

The formal definition of deadlock: if every process in a set is waiting for an event that can only be caused by another process in the same set, then that set of processes is deadlocked.

4.4.2 Four necessary conditions for deadlock
    1. Mutual exclusion: a resource is allocated for exclusive use, that is, over a period of time the resource is occupied by only one process. If another process requests the resource during that time, the requester can only wait until the occupying process releases it.

    2. Hold and wait: a process already holds at least one resource but makes a new request for a resource held by another process; the requesting process blocks, yet keeps the resources it already obtained.

    3. No preemption: a resource a process has obtained cannot be taken away from it before it is used up; the process releases it only of its own accord once finished.

    4. Circular wait: when deadlock occurs there must exist a circular process-resource chain, that is, in the process set {P0, P1, P2, ..., Pn}, P0 waits for a resource occupied by P1, P1 waits for a resource occupied by P2, ..., and Pn waits for a resource occupied by P0.
