Operating System Learning --- Process Management (II)

Source: Internet
Author: User
Tags: semaphore

    • Points:
      1. Fundamentals: process description and control
      2. Policy: process scheduling
      3. Implementation: mutual exclusion and synchronization
      4. Avoidance: deadlock and starvation
      5. Solutions: several classic problems
    • The introduction of processes
      1. Sequential execution of programs
        • Source program, object program, and executable program
        • Program execution: edit, compile, link, execute
        • Program structure: sequence, branch, and loop structures
        • Characteristics of sequential execution: sequential, closed, reproducible
      2. Concurrent execution of programs
        • Multiprogramming: concurrent execution of multiple programs
        • Characteristics of concurrent execution: intermittent, non-closed, not reproducible
        • Issues raised by concurrent execution:
          • Coordinating the execution order of programs: a computation must wait while its input data is not yet fully in memory
          • Multiple executing programs share system resources, so they can affect one another and even each other's results
          • Deciding which programs, and how many of them, are loaded into memory for execution
          • Deciding which in-memory program executes first and which later
          • How to allocate memory efficiently
    • Problems caused by the introduction of processes
      1. Increased space overhead: building data structures for processes
      2. Additional time overhead: managing and coordinating processes, tracking, filling in and updating data structures, switching processes, and saving/restoring context
      3. More difficult control: coordinating how multiple processes compete for and share resources, and preventing and resolving conflicts (such as deadlock) caused by that competition
      4. The competition for processors is particularly pronounced
    • Structure of the process
      1. Composition (process image): program, data set, and process control block (PCB)
      2. The PCB is the sole indication that a process exists. It is created when the process is created, and when the process terminates the system reclaims its PCB
      3. Information in the PCB: process name, process state, process priority, semaphores, saved context, process parameters, program address
    • Contents of the PCB
      1. Process identification information: an internal identifier (a label assigned by the system) and an external identifier (a name, like a person's name)
      2. Processor state information: general register values, program counter, program status word (PSW), user stack pointer
      3. Process scheduling information: process state, process priority, and additional information used for scheduling
      4. Other information: program and data pointers, process synchronization and communication mechanisms, resource lists, link pointers
    • How the PCB is organized
      1. Single queue: the PCBs of all processes are linked into one queue via a linked list; this applies to systems with a very large number of processes, such as the Windows operating system

      2. Table structure (more efficient lookup)
        • The PCBs are organized into separate tables by process state: a ready-process table, an executing-process table (in multiprocessor systems), and a blocked-process table
        • The system separately records the starting address of each PCB table
      3. Multilevel PCB queues
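
Tying the last two sections together, here is a minimal sketch of what a PCB and its per-state queues/tables might look like in C. This is illustrative only: the fields, names, and layout are assumptions for this sketch, not taken from any real operating system.

```c
/* Hypothetical, much-simplified PCB and per-state PCB tables. */

typedef enum { READY, RUNNING, BLOCKED, NUM_STATES } proc_state_t;

typedef struct pcb {
    int            pid;             /* internal identifier assigned by the system */
    char           name[16];        /* external identifier chosen by the user    */
    proc_state_t   state;           /* scheduling information                    */
    int            priority;
    unsigned long  regs[16];        /* saved general registers (processor state) */
    unsigned long  pc;              /* saved program counter                     */
    unsigned long  psw;             /* saved program status word                 */
    void          *user_stack_ptr;  /* saved user stack pointer                  */
    struct pcb    *next;            /* link pointer for PCB queues               */
} pcb_t;

/* Table/queue organization: one linked list of PCBs per process state
   (ready table, executing table, blocked table). */
static pcb_t *state_table[NUM_STATES];

/* Move a PCB into the queue for its new state. */
static void move_to_state(pcb_t *p, proc_state_t new_state)
{
    p->state = new_state;
    p->next  = state_table[new_state];
    state_table[new_state] = p;
}
```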

    • Problems caused by multiple processes
      1. Multiple processes competing for memory resources
      2. Tight Memory resources
      3. No ready process, processor idle: because I/O is much slower than the processor, it can happen that all processes are blocked waiting for I/O
      4. Workarounds:
        1. Swapping: swap some processes out to external storage to free up memory space
        2. Virtual memory: each process loads only part of its program and data (see the storage management section)
    • Control of the process
Two modes of execution

1. System mode (also known as system state, control mode, or kernel mode)

① Higher privileges

② Can run privileged, system-only instructions, including instructions that read and write control registers, basic I/O instructions, and storage-management instructions, and can access certain protected memory areas

③ In kernel mode, the processor and its instructions, registers, and memory are fully controlled and protected

2. User mode (or user state)

① Lower privileges

② User programs typically run in user mode

Mode switching

User → system: the user program reaches a system call and enters the operating system kernel

System → user: the kernel performs the system call's function and returns to the user program

Special case: when the program reaches its final (exit) statement, it switches to system mode and never returns to the user program
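
As a concrete (and deliberately trivial) illustration of mode switching, the ordinary C program below runs in user mode until it issues the write() system call; the call traps into the kernel (system mode), the kernel performs the I/O, and control then returns to user mode. This example is mine, not from the original notes.

```c
#include <unistd.h>
#include <string.h>

int main(void)
{
    const char *msg = "hello\n";
    /* user mode up to this point; write() is a system call, so the
       processor switches to kernel (system) mode to perform the I/O ... */
    write(STDOUT_FILENO, msg, strlen(msg));
    /* ... and switches back to user mode when the call returns */
    return 0;  /* exiting eventually enters the kernel again and never returns */
}
```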

Process Scheduling

Scheduling means selecting a suitable individual from a queue according to some method (algorithm).

The key to scheduling is the method or algorithm used; a good scheduling algorithm makes it easier to select suitable candidates.

Scheduling objectives: fairness, improved processor utilization, increased system throughput, minimal process response time

Scheduling principles:

1. Meet the user's requirements:

Response time: try to ensure that most users' requests can be answered within the required response time; often used to evaluate the performance of time-sharing systems

Turnaround time: the interval between a job's submission to the system and its completion; used to evaluate the performance of batch systems

Deadline: in real-time systems, the latest time a task must start executing, or the latest time by which it must complete; commonly used to evaluate the performance of real-time systems
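
These criteria are often written as formulas; the following is the standard textbook formulation (not spelled out in the notes above) for a job i with submission, finish, and required service times:

```latex
T_i = t_i^{\text{finish}} - t_i^{\text{submit}} \quad \text{(turnaround time)},
\qquad
W_i = \frac{T_i}{t_i^{\text{service}}} \quad \text{(weighted turnaround time)}
```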

2. Meet the requirements of the system:

System throughput

Processor utilization

Balanced use of all types of resources

Fairness and Priority

   Scheduling modes:

1. Non-preemptive mode

The running process keeps the processor until it finishes or blocks itself by requesting I/O

Poor "timeliness": not well suited to time-sharing and real-time systems with strict timing demands; mainly used in batch systems

2. Preemptive mode

The operating system may schedule a new process for execution when a new process arrives, when a previously blocked process with a higher priority enters the ready queue, or, in a time-slice based system, when the running process's time slice is exhausted and its execution is interrupted. This mode generates more interrupts and is mainly used in real-time systems with strict timing requirements and in batch systems with high performance demands.

   Scheduling types:

1. Batch scheduling, time-sharing scheduling, real-time scheduling, and multiprocessor scheduling

2. Long-term scheduling (from external storage into memory):

Also known as high-level scheduling or job scheduling; it creates a process for the selected job or user program, allocates the necessary system resources, and inserts the newly created process into the ready queue to wait for short-term scheduling

Some systems that use swapping instead insert newly created processes into a ready/suspended queue, to wait for medium-term scheduling

In a batch system, after a job enters the system it resides on disk in a batch queue, called the backlog queue. Long-term scheduling selects one or more jobs from this queue and creates processes for them

Questions to consider:

① How many jobs to admit into memory: depends on the degree of multiprogramming, i.e. the number of processes allowed to run in memory at the same time

② Which jobs to choose: depends on the long-term scheduling algorithm

3. Medium-term scheduling (between external storage and memory)

Also known as intermediate scheduling

When memory is tight, or the processor cannot find an executable ready process, a process (blocked or ready) is selected and swapped out to external storage to free memory for other processes; when memory becomes more plentiful, a suspended process is selected from external storage and swapped back into memory

Purpose: to improve memory utilization and system throughput

        Medium-term scheduling exists only in operating systems that support process suspension

4. Short-term scheduling (within memory):

Also known as low-level scheduling; it determines which process in the ready queue gets the processor

Short-term scheduling runs most frequently

        Almost all modern operating systems provide short-term scheduling

5. I/O scheduling (similarly, e.g. disk track scheduling)

Process scheduling algorithms

1. FCFS (first come, first served; applicable to all three kinds of scheduling):

Non-preemptive, simple to implement, and seemingly fair

    Note: a short process or an I/O-bound process that joins the queue may have to wait a long time

Unfair to short processes
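
A minimal FCFS sketch (the workload numbers are made up): processes run to completion in arrival order, and the printed waiting times show how a short process stuck behind a long one suffers.

```c
#include <stdio.h>

/* FCFS: run each process to completion in arrival order (non-preemptive). */
int main(void)
{
    int burst[] = {24, 3, 3};   /* hypothetical burst times: one long, two short */
    int n = sizeof burst / sizeof burst[0];
    int clock = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d, then runs for %d\n", i, clock, burst[i]);
        total_wait += clock;
        clock += burst[i];      /* the next process starts when this one finishes */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}
```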

2. Shortest process first (an improvement on FCFS)

Non-preemptive

Difficult to predict a process's execution time accurately

May starve long processes

Because it is non-preemptive and does not consider process urgency, it is unsuitable for time-sharing systems and transaction-processing systems
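
A non-preemptive shortest-process-first sketch with assumed burst estimates: among the jobs not yet run, always pick the one with the smallest estimated execution time. In practice, obtaining that estimate is the hard part mentioned above.

```c
#include <stdio.h>

#define N 4

/* Non-preemptive shortest-process-first over a batch that has all arrived. */
int main(void)
{
    int burst[N] = {8, 4, 9, 5};   /* hypothetical estimated execution times */
    int done[N] = {0};
    int clock = 0, total_wait = 0;

    for (int k = 0; k < N; k++) {
        int pick = -1;
        for (int i = 0; i < N; i++)             /* choose the shortest unfinished job */
            if (!done[i] && (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        printf("P%d waits %d, then runs for %d\n", pick, clock, burst[pick]);
        total_wait += clock;
        clock += burst[pick];
        done[pick] = 1;
    }
    printf("average waiting time = %.2f\n", (double)total_wait / N);
    return 0;
}
```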

3. Time-slice round-robin scheduling

Copes with a dramatic increase in the number of users

Time-slice size affects processing performance

① Process switching adds extra overhead to the system

② A time slice that is too long or too short is undesirable

③ The maximum number of users, response time, system efficiency, and other factors must be considered

More favorable to short, compute-bound processes

Less suitable for I/O-bound processes

One improvement: processes that have blocked for I/O can be organized into a separate ready queue whose time slice is set smaller and which is scheduled with higher priority
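
A bare-bones round-robin sketch; the quantum value and the workload are assumptions. Each unfinished process runs for at most one time slice per pass, and the quantum is the tuning knob discussed above.

```c
#include <stdio.h>

#define N 3
#define QUANTUM 4   /* time slice; too long degenerates to FCFS, too short thrashes */

int main(void)
{
    int remaining[N] = {10, 5, 7};   /* hypothetical remaining burst times */
    int clock = 0, left = N;

    while (left > 0) {
        for (int i = 0; i < N; i++) {
            if (remaining[i] <= 0) continue;
            int run = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            printf("t=%2d: P%d runs for %d\n", clock, i, run);
            clock += run;
            remaining[i] -= run;
            if (remaining[i] == 0) left--;   /* process finished */
        }
    }
    return 0;
}
```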

4. Priority-based scheduling

Assigning priorities to processes:

① The importance of the function the process performs

② The urgency of the function the process performs

③ Priorities assigned to processes (jobs) so as to balance the use of system resources

④ The process's resource demands; for example, a short process (or job) can be given a higher priority

Static and dynamic priorities

Dynamic priority:

Shortest remaining time first (preemptive)

Highest response ratio first

Process priority is proportional to waiting time

Difficult to estimate a process's execution time accurately

Calculating response ratios adds system overhead
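
The response ratio used by highest-response-ratio-first is conventionally defined as (standard formula, not stated explicitly in the notes):

```latex
R = \frac{\text{waiting time} + \text{required service time}}{\text{required service time}}
  = 1 + \frac{\text{waiting time}}{\text{required service time}}
```

So priority grows with waiting time (long waiters are not starved forever), but recomputing R for every ready process at each scheduling decision is exactly the extra overhead noted above.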

5. Feedback scheduling

Scheduling based on past execution history rather than on estimates of future execution time solves this problem

Scheduling is adjusted according to each process's execution history; the method combines priority scheduling with time-slice round-robin

Favors interactive short processes and short batch jobs, which generally need only one or a few time slices to complete

Turnaround time for long processes may increase sharply

If new processes keep arriving, long processes may even starve

Each queue can be given a different time slice; the lower the priority, the longer the time slice
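
A tiny multilevel-feedback sketch; the number of queues, the quanta, and the workload are all assumptions. A process starts in the top queue; using up a whole slice demotes it one level, where the slice is longer; the scheduler always serves the highest non-empty level. (A real implementation would also round-robin within each level.)

```c
#include <stdio.h>

#define LEVELS 3
#define NPROC  3

int main(void)
{
    int quantum[LEVELS] = {2, 4, 8};     /* lower priority -> longer time slice */
    int remaining[NPROC] = {1, 6, 20};   /* hypothetical burst times */
    int level[NPROC] = {0, 0, 0};        /* every process starts in the top queue */
    int clock = 0, left = NPROC;

    while (left > 0) {
        /* pick the first unfinished process in the highest-priority level */
        int pick = -1;
        for (int lv = 0; lv < LEVELS && pick < 0; lv++)
            for (int i = 0; i < NPROC; i++)
                if (remaining[i] > 0 && level[i] == lv) { pick = i; break; }

        int q = quantum[level[pick]];
        int run = remaining[pick] < q ? remaining[pick] : q;
        printf("t=%2d: P%d (level %d) runs for %d\n", clock, pick, level[pick], run);
        clock += run;
        remaining[pick] -= run;

        if (remaining[pick] == 0)
            left--;                          /* finished */
        else if (level[pick] < LEVELS - 1)
            level[pick]++;                   /* used its full slice: demote */
    }
    return 0;
}
```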

Mutual exclusion and synchronization of processes

  Question: how can multiple processes that compete for and share system resources such as memory space and external devices be coordinated? How do we prevent the results of concurrently executing processes from becoming unstable or invalid because of this competition?

Concurrency control:

1. Competing for resources

      1. It can cause deadlock.
      2. Some resources must be used in a mutually exclusive way; these are called critical resources
      3. The code that accesses a critical resource is called the critical section.
      4. At any moment only one process is allowed inside the critical section, which gives processes mutually exclusive access to critical resources

2. Exclusive use of the critical section

1. When a process needs to use a critical resource, it does so by obtaining access to the critical section.

2. In the entry section, the process first checks whether it may enter the critical section; if it may, it must set the critical-section-in-use flag to prevent subsequent processes from entering. A subsequent process, seeing from the flag that it cannot enter, joins the blocking queue and blocks itself

3. When the process in the critical section finishes, it exits by modifying the critical-section-in-use flag in the exit section, and it is responsible for waking up a process from the blocking queue so that it can enter the critical section

4. The critical-section-in-use flag must be a global variable shared by all processes in the system, and modifications of the flag must themselves be mutually exclusive

3. Principles for using critical sections

1. Only one process is allowed in the critical section at a time (wait when busy)

2. A process may stay in the critical section only for a limited time, and other processes must not be left waiting outside the critical section indefinitely (bounded waiting)

3. If the critical section is idle, a requesting process should be allowed to enter immediately (enter when free)

4. A process inside the critical section must not block there for a long time waiting for an event; it must leave the critical section within a bounded period (yield while waiting)

5. No restrictions may be placed on the execution speed of processes or on the number of processors

4. Competing for resources may cause deadlock

5. Competing for resources may cause starvation

6. Concurrency control: cooperation through sharing

1. Multiple processes often need to jointly modify shared variables, tables, files, databases, etc., cooperating to accomplish some function

2. Modifications of shared variables must be correct, and the data must remain complete

3. Sharing cooperation also involves mutual exclusion, deadlock, and starvation; here the emphasis is that writes to the data must be mutually exclusive

4. Data consistency must be ensured (e.g. a bank account's deposits, withdrawals, and balance)

5. Consistency is generally ensured through transaction processing: a process that enters the critical section must complete the modification of this whole series of data items in one go

6. Only after the process exits the critical section are other processes allowed to enter it to modify the data, which ensures consistency.

7. Concurrency control: cooperation through communication

When processes cooperate through communication, they need to establish a connection, and the communication itself needs to be synchronized and coordinated. Processes can communicate in many ways, including message passing, pipes, shared storage, and so on.

When process communication is implemented with message passing, no mutual exclusion is needed because there are no shared resources, but deadlock and starvation can still occur

Communication deadlock and starvation

8. Strategies for implementing mutual exclusion and synchronization

      Software approach

The process itself executes the appropriate program instructions to achieve mutual exclusion and synchronization with other processes, without special programming-language or operating-system support

It is difficult to control synchronization and mutual exclusion between processes correctly this way, and it may significantly increase system overhead

      Hardware methods

Synchronization and mutual exclusion are controlled by disabling interrupts or by using special machine instructions

This reduces system overhead, but it imposes strong hardware requirements and can cause process starvation and deadlock, so it has not become a general solution

      Semaphore method (emphasis)

Specially supported by the operating system or by dedicated programming languages; this category includes the semaphore method, the monitor method, and the message-passing method (a minimal semaphore example follows this list)

General methods

      Monitor method

Message-passing method
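
As one concrete form of the semaphore method (here POSIX unnamed semaphores between threads, chosen only for brevity; the notes do not prescribe an API): sem_wait plays the role of the P operation and sem_post the role of V. Initializing the semaphore to 0 makes it a pure synchronization signal rather than a mutual-exclusion lock.

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t data_ready;         /* starts at 0: consumer must wait for producer */
static int shared_value;

static void *producer(void *arg)
{
    (void)arg;
    shared_value = 42;           /* produce the data first                  */
    sem_post(&data_ready);       /* V: signal "data is ready", wake a waiter */
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    sem_wait(&data_ready);       /* P: block until the producer has posted  */
    printf("consumed %d\n", shared_value);
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&data_ready, 0, 0); /* initial value 0 enforces the ordering   */
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&data_ready);
    return 0;
}
```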

9. Software methods

Dekker's algorithm

Peterson's algorithm

Initial idea: to control two processes' mutually exclusive entry into the critical section, let the two processes take turns entering it

Mutual exclusion is guaranteed.

Busy waiting occurs

A must wait until B has used the critical section before A can use it again
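
Peterson's two-process algorithm, sketched with C11 atomics and two threads standing in for the two processes (the variable names and the counter workload are mine; the original notes give no code). The default sequentially consistent atomics are what make the flag/turn reasoning valid on modern hardware.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

static atomic_bool flag[2];      /* flag[i]: process i wants to enter     */
static atomic_int  turn;         /* whose turn it is to yield             */
static long counter = 0;         /* shared data protected by the protocol */

static void *proc(void *arg)
{
    int i = *(int *)arg, other = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);          /* entry section            */
        atomic_store(&turn, other);            /* give priority to the other */
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                                  /* busy wait                */
        counter++;                             /* critical section         */
        atomic_store(&flag[i], false);         /* exit section             */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, proc, &id0);
    pthread_create(&t1, NULL, proc, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```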

