Implementation of Five Process Scheduling Algorithms (I)


Lab requirements

1. Implement simulated process scheduling based on an event-driven model, including:

  • Shortest Job First (SJF);
  • Shortest Remaining Time First (SRTF);
  • Highest Response Ratio First (HRRF);
  • Priority Scheduling (Priority);
  • Round Robin (RR).

Among them, SJF and HRRF are non-preemptive scheduling, while the rest (SRTF, Priority, and RR) are preemptive.

2. The five scheduling algorithms must be implemented in C. (For convenience, the C++ <iostream> header is included so that cout can be used for output.)
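
Before the algorithms themselves, it may help to see what "event-driven" can look like in code. The following is only a minimal sketch, not the required lab interface: the Process struct fields and the function nextEventTime are placeholder names of my own. The idea is simply that the simulation clock jumps from one event (an arrival or a completion) to the next, instead of ticking forward one unit at a time.

    #include <iostream>
    #include <vector>
    #include <climits>
    using namespace std;

    // A minimal process record for the simulation (field names are placeholders).
    struct Process {
        int pid;        // process id
        int arrival;    // arrival time
        int burst;      // total CPU time required
        int remaining;  // CPU time still needed
        int finish;     // completion time, filled in by the simulation
    };

    // Event-driven idea: instead of advancing time by 1, jump straight to the next
    // event, i.e. the earliest future arrival or the completion of the running
    // process (pass INT_MAX as runningFinish when the CPU is idle).
    int nextEventTime(const vector<Process>& ps, int now, int runningFinish) {
        int t = runningFinish;
        for (const Process& p : ps)
            if (p.remaining > 0 && p.arrival > now && p.arrival < t)
                t = p.arrival;
        return t;
    }

    int main() {
        vector<Process> ps = { {1, 0, 5, 5, 0}, {2, 2, 3, 3, 0} };
        // Process 1 starts at t=0 and would finish at t=5, but process 2 arrives at t=2,
        // so the next event is at t=2.
        cout << "next event after t=0: " << nextEventTime(ps, 0, 0 + 5) << endl;
        return 0;
    }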

 

Basic Knowledge

I. Processes

1.1 Process description

Broadly speaking, a process is one running activity of a program with independent functions, operating on a data set.

A process is an abstract concept, an abstraction of a running program. A program is a sequence of code or instructions; it is something static, just like a book.

When does a program become "dynamic"? When we execute it. For example, double-click the Word shortcut with the mouse, and after a short moment the Word window appears in front of you. Strictly speaking, it is not "you" who executes the program; you ask the computer to execute it. How does the computer execute a program? As we all know, this involves its internals, or rather its "brain", the central processing unit (CPU).

What is the CPU? The CPU executes specific instructions. Continuing the analogy above, if the program is a book, then the CPU plays the role of the reader, and the text in the book corresponds to the instructions. After "reading the book", the CPU has done what is needed to accomplish a task.

To continue the book analogy, a good example of a book of instructions is a recipe.

Take a recipe as an example. Suppose you want to cook a dish now by following the 10 steps on the recipe. Before cooking, you must first prepare the ingredients, spices, and so on; these are not instructions, so what do they correspond to? In fact, a program file (an executable file) on a computer contains not only code but also data, such as the application icon. Next, "you" (the CPU) cook according to the steps, and after a series of steps a finished dish is placed in front of your eyes. This matches the phrase "independent functions" in the definition: making one dish is completing one task independently.

1.2 Process status

The simplest summary (the three-state model) is: a process can be in the waiting (blocked), ready, or running state. The five-state model adds two more states to the three-state model: the new state and the terminated state.

The following example uses stir-frying several dishes at the same time; cooking one dish corresponds to one process.

  • Waiting state: a dish needs to be boiled, steamed, or simmered for a while before you can continue with it;
  • Ready state: a dish has finished boiling, steaming, or simmering and needs your attention, but you are currently busy with another dish and have not switched over to it yet;
  • Running state: the dish you are currently stir-frying;
  • New state: you are preparing to start cooking this dish;
  • Terminated state: this dish is finished, and only some final clean-up remains.
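
To make the five-state model concrete, here is a minimal sketch of a state enumeration and a simplified process control block (PCB). The field set is illustrative only; a real PCB records far more information (registers, open files, memory maps, and so on).

    #include <iostream>
    using namespace std;

    // The five states of the five-state model.
    enum ProcState { NEW, READY, RUNNING, WAITING, TERMINATED };

    // A minimal process control block (PCB) sketch.
    struct PCB {
        int       pid;
        ProcState state;
        int       priority;
    };

    int main() {
        PCB p = {1, NEW, 3};
        p.state = READY;      // admitted: NEW -> READY
        p.state = RUNNING;    // dispatched: READY -> RUNNING
        p.state = TERMINATED; // finished: RUNNING -> TERMINATED
        cout << "final state: " << p.state << endl;  // prints 4 (TERMINATED)
        return 0;
    }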
1.3 Process features

A process has four features: dynamics, concurrency, independence, and asynchrony.

Again, take stir-frying several dishes at the same time as the example.

  • Dynamics: during the day, the whole cooking activity is "alive" because "you" are working. In the evening no one is in the kitchen, the activity is "dead", and everything in the kitchen is back in its original place, as if no one had cooked there. The activity switches states over a day and a night, so it is dynamic;
  • Concurrency: you can cook several dishes at the same time;
  • Independence: the seasoning you prepare is prepared for the needs of one particular dish, not borrowed piecemeal by another. The steps of a recipe are followed one by one; one recipe completes one dish, and different recipes complete different dishes. Cooking this dish does not affect cooking that one;
  • Asynchrony: the cooking of each dish is interrupted (because several dishes are being cooked at once), and when the interruptions happen is unknown. The overall time for the whole meal varies from run to run, which reflects the unpredictability of cooking several dishes at once.

The above is my analogy; the textbook description is as follows:

  • Dynamics: the essence of a process is one execution of a program in a multiprogramming system. Since it is a course of execution with a beginning and an end, a process is created and dies out, and after a process finishes there is generally no trace of its running left behind;
  • Concurrency: any process can execute concurrently with other processes. Concurrency means that, at any instant, only one program is actually running on the CPU (this is different from parallelism);
  • Independence: a process is the basic unit that runs independently and the independent unit of system resource allocation and scheduling;
  • Asynchrony: because processes constrain one another, their execution is intermittent; that is, each process advances at its own independent and unpredictable speed.
1.4 Process Scheduling

Process scheduling can be divided into preemptive and non-preemptive scheduling.

Preemptive scheduling, as its name suggests, has two aspects: "seizing" (preempting the CPU) and "occupying" (holding the CPU).

What is seizing? According to the algorithm, a task with a larger weight or higher priority naturally "seizes" the opportunity to run. If no other task is running at that moment, task A simply runs. If the system is running another task B at that moment and, according to the algorithm, task A is entitled to "seize" the CPU, it is like grabbing a stool: task A grabs it and task B loses it, so task B is stopped and task A starts running.

What is occupying? After a task obtains the opportunity to run, it occupies the corresponding running time (CPU time). During that time the system runs only this task, until the next time the system reallocates running time according to the algorithm.

So it really is like grabbing stools: grabbing stools is also a way of allocating resources. A person with a bigger build, more strength, and quicker reactions has more chances to grab one. Many people grab for the same stool, which is competition, and the better the stools, the more people compete for them. The loser hopes to win the next round as soon as possible; the winner tries to keep the advantage, and may look down on the loser, but cannot afford to be complacent.

Here we also need the idea of a "queue". Suppose you want to complete a series of things; taking the recipe as the example, there are 10 steps in total. You write the 10 steps on 10 pieces of paper and stack them up. The top piece of paper in the stack is the step you are about to do, step 1; after you finish step 1 and discard that piece of paper, the top of the stack is step 2, and so on, until only step 10 is left. After that, all the pieces of paper are gone, and the queue is "empty".

Under both preemptive and non-preemptive scheduling, tasks run in the order given by the queue, so the queue represents the "life cycle" of the tasks. As long as a task is still in the queue, either it is not yet "dead" (it has not finished) or it is not yet "born" (it has not started running).

Now the focus is on how to arrange this queue, which brings us back to the five scheduling algorithms above: "scheduling" can be seen as a way of arranging the queue.

In terms of the queue: with preemptive scheduling, once task A is running it may be preempted by another task B, in which case A is moved back into the queue and B goes to the front; with non-preemptive scheduling, once task A is running it keeps running until it ends, and if task B wants to run it can only be placed behind A. In other words, the running order of the tasks is determined when they are added to the queue, and the order of the queue is not changed afterwards.
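
The difference between the non-preemptive and the preemptive case can also be seen in code. The sketch below illustrates only the selection step, using SJF and SRTF as examples; the Job struct and the helper names are my own placeholders, not part of the lab specification.

    // SJF (non-preemptive): once started, a job keeps the CPU until it finishes,
    // so the shortest arrived job is chosen only when the CPU becomes free.
    // SRTF (preemptive): a newly arrived job with a shorter remaining time
    // can take the CPU away from the running job.
    #include <iostream>
    #include <vector>
    using namespace std;

    struct Job { int pid, arrival, remaining; };

    // Index of the arrived, unfinished job with the least remaining time, or -1 if none.
    int pickShortest(const vector<Job>& jobs, int now) {
        int best = -1;
        for (int i = 0; i < (int)jobs.size(); ++i) {
            const Job& j = jobs[i];
            if (j.arrival <= now && j.remaining > 0 &&
                (best == -1 || j.remaining < jobs[best].remaining))
                best = i;
        }
        return best;
    }

    // SJF calls pickShortest only when the CPU is idle.
    // SRTF also calls it at every arrival event and preempts if a shorter job appears:
    bool shouldPreempt(const vector<Job>& jobs, int now, int runningIdx) {
        int best = pickShortest(jobs, now);
        return best != -1 && best != runningIdx &&
               jobs[best].remaining < jobs[runningIdx].remaining;
    }

    int main() {
        vector<Job> jobs = { {1, 0, 6}, {2, 2, 3} };
        jobs[0].remaining = 4;  // job 1 has already run for 2 units when job 2 arrives at t=2
        cout << boolalpha << shouldPreempt(jobs, 2, 0) << endl;  // true under SRTF
        return 0;
    }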

II. Threads

2.1 Meaning of a thread

A thread is the smallest unit of the program's execution flow and the basic unit that the system independently schedules and dispatches.

Just as a process is an abstraction, a thread is also an abstraction, a higher-level one. In the past a process could only carry out one task at a time, but growing demands pushed a single process to carry out several tasks at a time. Without the concept of a thread, the tasks inside one process could only run one after another (sequentially).

If we now regard a process as a small "system", then the "processes" of that system are its threads. The process was originally the basic unit of task scheduling; compared with a thread, a process is too coarse, so the thread has a finer granularity.

2.2 Thread states

The thread status is the same as the process status. It is also a five-state model. You can refer to the five-state model of the process.

2.3 Thread features
  • A thread is the basic unit of operating system scheduling;
  • Thread state switching is faster and has less overhead than process switching;
  • A thread does not own resources by itself; it is an abstraction of a task, and threads in the same process share the resources of that process;
  • On a single-core CPU, only one thread can run at a time;
  • A process has at least one thread, its main thread.
2.4 Differences between threads and processes
  • Processes are independent of each other (their resources are independent), while the threads of one process share its resources (if they could not share, a thread would be no different from a process);
  • Because of this independence, when processes need to communicate with each other the system can only provide external mechanisms, which is cumbersome, whereas threads can communicate by directly sharing data;
  • Thread state switching is faster and has less overhead than process switching;
  • In a multithreaded system, the thread is the executable entity, because a thread is the abstraction of a concurrent task within a process. Originally the process was the subject that ran tasks; with threads, that burden falls on the threads.
2.5 Advantages of threads
  • They make full use of CPU resources (related to the hyper-threading technology mentioned below);
  • They provide concurrency within a process and allow the task granularity to be subdivided further, which helps developers decompose and abstract tasks;
  • They allow asynchronous event handling within a process, which is especially useful for GUI events and server applications;
  • They improve the running efficiency of the program.
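
As a small illustration of intra-process concurrency, the sketch below creates two threads inside one process. It uses the C++11 <thread> API purely for illustration (the lab itself does not require threads); with g++ it would be compiled with something like -std=c++11 -pthread.

    #include <iostream>
    #include <thread>
    using namespace std;

    // Two threads of the same process run two tasks "at the same time"
    // and share the process's memory.
    void cook(const char* dish) {
        cout << "cooking " << dish << " in thread "
             << this_thread::get_id() << endl;
    }

    int main() {
        thread t1(cook, "dish A");   // create two threads in one process
        thread t2(cook, "dish B");
        t1.join();                   // wait for both to finish
        t2.join();
        cout << "both dishes done" << endl;
        return 0;
    }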
III. Synchronous and Asynchronous

3.1 Meaning of synchronization

Synchronization, simply put, means that when you call a procedure, no result is returned while the procedure is still executing, and you cannot go on to the next thing until the procedure returns.

For example, you have to put on socks first and then shoes; the order cannot be reversed. The general characteristics of synchronization are: order, determinism, and simplicity.

Order: the running order is never reversed; everything happens in sequence;

Determinism: the running order is fixed and predictable, just as a mathematical calculation always leads to the same final answer;

Simplicity: synchronization does not involve the concept of threads; a simple single-threaded program runs without using the thread synchronization mechanisms (APIs) provided by the system.

3.2 Meaning of asynchrony

Asynchrony is the opposite of synchronization: when you call a procedure, you do not get its return value immediately, and you go on with other things while it runs.

For example, when you use a microwave oven to heat food, you press the "Start" key to start it. You do not stand there waiting for it to finish heating; you go and do other things. When you hear the "ding", you know the heating is complete and you take out the food.

There are two details here: you can do other things in the meantime, and you come back when you hear the "ding".

While you do other things, two things are happening at once; each can be abstracted as a thread, so two threads are running at the same time;

When I heard a "ding", I ran over, indicating that the process was returned, but you didn't wait until this process was returned. This "return" isClever.

Therefore, to implement asynchrony you must learn to do two things: start a new task (create a thread), and find out that the old task has finished so you can handle it (via a status flag, a notification, or a callback).

3.3 Asynchronous Method

A thread can be created by calling the system API. But how do we know that the old task has finished?

Status: set a shared variable (a flag). When the old task finishes, it sets the variable to a valid value, and the new task checks the variable in a loop to see whether it has become valid;

Notification: like a download manager notifying you when a download completes, the old task sends a notification or message to the new task when it finishes;

Callback: the finishing work of the old task is handed to the old task itself, which executes the callback after it has finished.

Of the three methods above, "callback" means the old task needs no help from the new task; "notification" means the old task has a direct connection with the new task; "status" means the old and new tasks are connected only indirectly, through the status variable.

With a callback, the new task can ignore the old task entirely;

With a notification, the new task must wait for the notification and handle it when it arrives; while waiting, the new task is usually in the blocked state;

With a status variable, the new task does not have to block and wait; it only needs to check from time to time whether the status variable has become valid (that is, holds a valid value). This approach is called polling.
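
Here is a minimal sketch of the "status" (shared flag plus polling) approach, using the microwave example above. It assumes C++11 std::thread and std::atomic purely for illustration; any other thread API with a shared flag would work the same way.

    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>
    using namespace std;

    atomic<bool> done(false);   // the shared status variable (flag)

    // The "old task": heats the food, then sets the flag when finished.
    void microwave() {
        this_thread::sleep_for(chrono::milliseconds(300));  // pretend to heat
        done = true;                                        // "ding"
    }

    int main() {
        thread oven(microwave);          // start the old task asynchronously
        while (!done) {                  // the "new task" polls the status flag
            cout << "doing something else..." << endl;
            this_thread::sleep_for(chrono::milliseconds(100));
        }
        oven.join();
        cout << "heating finished, take out the food" << endl;
        return 0;
    }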

IV. Concurrency and Parallelism

4.1 Meaning of concurrency

When there are multiple processes and the system has a single-core CPU, it is impossible to run more than one process at the same time. The system can only divide CPU running time into several time slices (using a scheduling algorithm to assign a task at the start of each slice) and give each slice to one process. While one process runs during its slice, the other processes are suspended (ready). This approach is called concurrency (concurrent execution).

The process scheduling discussed above is exactly about process concurrency. The essence of concurrency is the distribution of time slices: execution is intermittent at the micro level but continuous at the macro level. If the system has 26 processes in total, it runs only process A during one slice, then process B during the next, and so on until process Z, after which every process has had a turn. The mark of a process being "favoured" is its CPU time: the more time it gets, the longer it runs and the faster it progresses. The A-to-Z order here is just an example and rarely happens in practice.
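
As a small sketch of time slicing, the following round-robin style loop gives each process at most a fixed quantum of CPU before putting it back at the end of the ready queue. The quantum value and the process list are made-up example data, not part of the lab specification.

    #include <algorithm>
    #include <iostream>
    #include <queue>
    #include <vector>
    using namespace std;

    // Single-core time slicing (round-robin style): each process gets at most
    // `quantum` units of CPU, then goes to the back of the ready queue.
    int main() {
        const int quantum = 2;
        vector<int> remaining = {5, 3, 4};   // remaining CPU time of processes 0,1,2 (all arrive at t=0)
        queue<int> ready;
        for (int i = 0; i < (int)remaining.size(); ++i) ready.push(i);

        int now = 0;
        while (!ready.empty()) {
            int p = ready.front(); ready.pop();
            int run = min(quantum, remaining[p]);   // run for one time slice (or less)
            now += run;
            remaining[p] -= run;
            cout << "t=" << now << ": process " << p << " ran " << run << " units\n";
            if (remaining[p] > 0) ready.push(p);    // not finished: back of the queue
            else cout << "process " << p << " finished at t=" << now << "\n";
        }
        return 0;
    }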

The single-core concept is mentioned in the above definition.

The difference between single-core and multi-core is that, at any instant, a single-core CPU can execute only one task, while a multi-core CPU can execute several. The more tasks that can proceed at once, the faster they progress and the more efficiently the system completes them.

Another related concept is hyper-threading (Hyper-Threading Technology).

Hyper-threading uses special hardware support to present one physical core as two logical cores, so that thread-level parallelism can be exploited inside a single processor. With suitable hardware and software support this greatly improves running efficiency, making a single processor behave somewhat like a dual processor. In essence, hyper-threading is a technique for making full use of resources that would otherwise sit idle inside the CPU.

Although hyper-threading lets a core execute two threads at the same time (and, if each process runs only one of its threads at a time, this is equivalent to executing two processes at the same time), it is not the same as two real CPUs, each with its own resources. When both threads need the same resource at the same time, one of them must pause until that resource becomes free. Therefore the performance of a hyper-threaded core is not equal to that of two CPUs.

The difference between hyper-threading and multi-core lies in the independence of resources. When two running threads belong to the same process, their resources may conflict under hyper-threading, whereas on a multi-core CPU the two threads run on two different cores, each with its own resources, even if they belong to the same process.

4.2 Meaning of parallelism

When the system has more than one CPU core, threads do not necessarily have to share one core. While one CPU executes one thread, another CPU can execute another thread; the two threads do not preempt each other's CPU resources and really do run at the same time. This is called parallelism (parallel execution).

Parallelism is a multi-core concept. Concurrency is intermittent at the micro level and continuous at the macro level, while parallelism goes one step further and is "continuous" even at the micro level.

4.3 Difference between concurrency and parallelism

The two concepts of concurrency and parallelism are easy to confuse.

Parallelism refers to the occurrence of two or more events at the same time; concurrency refers to the occurrence of two or more events within the same time interval.

Concurrency is a product of the single-core CPU: intermittent at the micro level, continuous at the macro level. Parallelism is a product of the multi-core CPU: continuous at both the micro and the macro level.

Take counting coins as an example. There is a pile of coins. Concurrency is one person, A, counting them; parallelism is a group of people, B1 to Bn, counting them.

For A there are two options. Option one: count the coins from beginning to end in one go. Option two: divide the coins into roughly equal parts, count one part for a while, then switch and count another part for a while.

For B: divide the coins into N roughly equal parts and let the N people count their parts separately. Because there is more "manpower", B finishes counting sooner than A.

A's option one corresponds to a single sequential program, A's option two to a concurrent (multiprogrammed) program, and B to a parallel program.

  • Parallel programs are more efficient than concurrent programs and use resources better, but programming them is more complex, results can be unpredictable, and debugging is harder;
  • In parallel execution there is competition for resources, while in (single-core) concurrency there is not (only one program holds the CPU at any instant), so parallelism brings mutual exclusion and synchronization problems, and with them the possibility of deadlock;
  • Concurrency is a product of the single-core CPU: intermittent at the micro level, continuous at the macro level. Parallelism is a product of the multi-core CPU: continuous at both the micro and the macro level.
4.4 Deadlock

4.4.1 Definition of deadlock

The standard definition of deadlock: if every process in a set is waiting for an event that can only be triggered by another process in the same set, then the processes in that set are deadlocked.

4.4.2 Four necessary conditions for deadlock
  • Mutual exclusion: a resource can be held by only one process at a time;
  • Hold and wait: a process holding at least one resource is waiting for additional resources held by other processes;
  • No preemption: a resource can only be released voluntarily by the process that holds it;
  • Circular wait: there is a circular chain of processes, each waiting for a resource held by the next one in the chain.
