Summary of "operating system" key points of knowledge


Basic features of the operating system
    1. Concurrency: multiple programs execute within the same period of time. (Note the difference between concurrency and parallelism: parallelism means multiple events occurring at the same instant, while concurrency means multiple events occurring within the same period.)
    2. Sharing: resources in the system can be used by multiple concurrently executing processes in memory.
    3. Virtualization: through time-division multiplexing (e.g., time-sharing systems) and space-division multiplexing (e.g., virtual memory), one physical entity is made to appear as multiple logical ones.
    4. Asynchrony: processes in the system execute in a stop-and-go manner and advance at unpredictable speeds.
Main functions of the operating system
    1. Processor management: processor allocation is done on a per-process basis, so processor management is also viewed as process management. It includes process control, process synchronization, process communication, and process scheduling.
    2. Memory management: memory allocation, memory protection, address mapping, and memory extension.
    3. Device management: manages all peripheral devices, including completing user I/O requests, allocating I/O devices to user processes, improving I/O device utilization, improving I/O speed, and making I/O easier to use.
    4. File management: manages user files and system files for ease of use while ensuring security. It includes disk storage management, directory management, file read/write management, and file sharing and protection.
    5. Providing user interfaces: program interfaces (such as APIs) and user interfaces (such as GUIs).
The difference between a process and a thread

Process: a process is the execution of a program (a process entity in motion); it is an independent unit of resource allocation and scheduling in the system. It is dynamic, concurrent, independent, and asynchronous; it has three basic states (ready, running, blocked), which extended models expand to 5 or 7 states; and it is the basic unit of resource ownership. Processes were introduced to let multiple programs execute concurrently, improving the system's resource utilization and throughput.

Thread: a unit smaller than a process that can run independently; it can be seen as a lightweight process (it is a lightweight entity, the basic unit of independent scheduling and dispatching, can execute concurrently, and shares its process's resources). Threads were introduced to reduce the overhead of concurrent execution and make OS concurrency more efficient.

The comparison between the two (a sketch in code follows this list):
1. Scheduling: in an OS that supports threads, the thread is the basic unit of scheduling and dispatching, while the process is the unit of resource ownership (this separates the two roles a process plays in a traditional, thread-less OS). Because threads own no resources, switching between them significantly increases concurrency and reduces switching overhead.
2. Concurrency: in an OS that supports threads, not only can processes run concurrently, but multiple threads within one process can also run concurrently, giving the OS better concurrency and effectively improving system resource utilization and throughput.
3. Resource ownership: whether or not the OS supports threads, the process is the basic unit of resource ownership. A thread owns very few resources of its own, but it can access the resources of the process it belongs to (the process's code segment, data segment, and the system resources the process owns, such as file descriptors).
4. System overhead: when creating or destroying a process, the system must create or reclaim the PCB, system resources, and so on, and must also save and restore the CPU context on every switch. Switching threads requires saving and restoring only a small number of registers and involves no memory-management work, so the overhead is small. In addition, because multiple threads in the same process share an address space, synchronization and communication between them are more convenient.
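
A minimal Python sketch of point 3: threads share their process's address space, while separate processes each work on their own copy. The names (counter, worker) are illustrative, not from the original text.

```python
# Threads share their process's memory; child processes get their own copy.
import threading
import multiprocessing

counter = 0

def worker():
    global counter
    counter += 1  # mutates the counter in whichever address space runs it

if __name__ == "__main__":
    # Threads: both workers see and modify the same `counter`.
    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("after threads:", counter)    # 2 -- shared address space

    # Processes: each child increments its own copy; the parent's stays 0.
    counter = 0
    procs = [multiprocessing.Process(target=worker) for _ in range(2)]
    for p in procs: p.start()
    for p in procs: p.join()
    print("after processes:", counter)  # 0 -- separate address spaces
```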

Several states of a process

The 3 basic states are the main ones; the 5-state and 7-state models follow directly from them.
1. Ready: the process has obtained all necessary resources except the CPU and can execute as soon as it is given the CPU.
2. Running: the process has obtained the CPU and is executing. On a multiprocessor system, multiple processes can be in the running state at once.
3. Blocked: the running process temporarily cannot continue because of some event; it gives up the processor and pauses, and is then in the blocked state.

Ready -> Running: the scheduler assigns the process a processor.
Running -> Ready: its time slice runs out.
Running -> Blocked: it requests a critical resource that cannot be satisfied, such as an I/O request or a buffer request.
Blocked -> Ready: the request is satisfied, e.g., the I/O completes.
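
These four transitions can be encoded as a small lookup table. A minimal sketch, with hypothetical event names such as "dispatched" and "wait_for_io":

```python
# The 3-state transition diagram as a (state, event) -> state table.
from enum import Enum

class State(Enum):
    READY = "ready"
    RUNNING = "running"
    BLOCKED = "blocked"

TRANSITIONS = {
    (State.READY,   "dispatched"):    State.RUNNING,  # scheduler assigns CPU
    (State.RUNNING, "time_slice_up"): State.READY,    # preempted by the timer
    (State.RUNNING, "wait_for_io"):   State.BLOCKED,  # request cannot be met
    (State.BLOCKED, "io_complete"):   State.READY,    # request satisfied
}

def step(state, event):
    return TRANSITIONS[(state, event)]

s = State.READY
for e in ("dispatched", "wait_for_io", "io_complete", "dispatched"):
    s = step(s, e)
    print(e, "->", s.name)
```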

Process synchronization

While multiple concurrent processes improve system resource utilization and throughput, the asynchronous nature of processes can throw the system into disorder. The task of process synchronization is to coordinate the execution order of multiple related processes, so that concurrently executing processes can share resources effectively and cooperate with each other, ensuring the reproducibility of program execution.

Principles a synchronization mechanism must follow (a minimal lock-based sketch follows this list):
1. Free entry when idle: when no process is in the critical section, any process requesting entry should be allowed in.
2. Wait when busy: if a process is already in the critical section, any other process requesting entry must wait, guaranteeing mutually exclusive access to the critical section.
3. Bounded waiting: a process requesting access to a critical resource must be able to enter the critical section within a bounded time, preventing it from waiting forever (starvation).
4. Yield while waiting: when a process cannot enter the critical section, it should release the processor immediately, avoiding a busy wait.
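
A minimal sketch of principles 1 and 2 using a lock: while one thread holds the lock (is in the critical section), any other thread requesting entry must wait, and a blocked thread sleeps rather than spinning, consistent with "yield while waiting". All names are illustrative.

```python
# Mutually exclusive access to a critical section via a lock.
import threading

lock = threading.Lock()
shared = 0

def enter_critical_section(n):
    global shared
    for _ in range(n):
        with lock:        # blocks (sleeping, not spinning) while another thread holds it
            shared += 1   # critical section: exactly one thread at a time

threads = [threading.Thread(target=enter_critical_section, args=(100_000,))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared)  # always 400000 with the lock; without it the result can be unpredictable
```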

Classic process synchronization problems: the producer-consumer problem, the dining philosophers problem, and the readers-writers problem.
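
As an illustration of the first problem, here is a minimal bounded-buffer producer-consumer sketch using the textbook pattern of two counting semaphores plus a mutex (the buffer size and item values are hypothetical):

```python
# Producer-consumer with a bounded buffer: empty/full semaphores + a mutex.
import threading
from collections import deque

BUFFER_SIZE = 4
buffer = deque()
mutex = threading.Lock()                  # mutual exclusion on the buffer
empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots
full = threading.Semaphore(0)             # counts filled slots

def producer(items):
    for item in items:
        empty.acquire()        # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()         # signal one more filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()         # wait for a filled slot
        with mutex:
            item = buffer.popleft()
        empty.release()        # signal one more free slot
        print("consumed", item)

p = threading.Thread(target=producer, args=(range(8),))
c = threading.Thread(target=consumer, args=(8,))
p.start(); c.start(); p.join(); c.join()
```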

Inter-process communication

Process communication refers to the exchange of information between processes; the information exchanged can be as little as a status value or as much as many bytes of data. Synchronization and mutual exclusion between processes also involve exchanging information, so they too count as a form of IPC, belonging to low-level communication. The problems with low-level communication are: 1) the amount of data communicated is too small; 2) the communication is not transparent to the user (data transfer and synchronization/mutual exclusion must be implemented by the programmer).

Advanced communication mechanisms (the OS hides the communication details, so these are easier to use and can transfer large amounts of data; this is especially true of pipe communication):
1. Shared-memory systems: communicating processes share certain data structures or storage areas and communicate through these shared spaces. This divides into: 1) communication based on shared data structures, such as the bounded buffer in the producer-consumer problem; 2) communication based on shared storage, which can transfer large amounts of data, with the communicating processes reading and writing the shared storage area just like ordinary memory.
2. Message-passing systems: processes communicate using formatted messages, sent and received directly with the send/receive primitives provided by the OS. Because the communication details are hidden, this simplifies communication programs.
3. Pipe communication: a pipe is a shared file connecting a reading process and a writing process to exchange data. To coordinate the two sides, the pipe mechanism must provide: 1) mutual exclusion: only one process may read from or write to the pipe at a time; 2) synchronization: when the reading side finds the pipe empty, it must wait until data arrives and it is woken up, and correspondingly the writing side waits when the pipe is full; 3) existence of the other side: the pipe is only meaningful while both the reading end and the writing end exist. (A minimal sketch follows this list.)
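
A minimal sketch of pipe communication from item 3, using Python's multiprocessing.Pipe so that the OS handles the blocking/wakeup synchronization described above (the message contents and names are hypothetical):

```python
# One write end, one read end; recv() blocks while the pipe is empty,
# and EOFError signals that the write end no longer exists.
import multiprocessing

def writer(conn):
    for msg in ("hello", "from", "the write end"):
        conn.send(msg)
    conn.close()               # lets the reader detect end-of-stream

def reader(conn):
    while True:
        try:
            print("read:", conn.recv())  # blocks while the pipe is empty
        except EOFError:                 # all write ends closed: other side gone
            break

if __name__ == "__main__":
    r, w = multiprocessing.Pipe(duplex=False)  # (read end, write end)
    pw = multiprocessing.Process(target=writer, args=(w,))
    pr = multiprocessing.Process(target=reader, args=(r,))
    pw.start(); pr.start()
    w.close()                  # parent closes its copy so EOF can be seen
    pw.join(); pr.join()
```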

Process/job scheduling algorithms

Basic scheduling algorithms:
1. First-come, first-served (FCFS): can be used as a job scheduling algorithm or a process scheduling algorithm; jobs or processes are served in the order they arrive, so the algorithm favors long jobs.
2. Shortest job first (SJF/SPF): a job scheduling algorithm that selects the job with the shortest estimated run time from the ready queue and processes it until it finishes or can no longer proceed. Disadvantages: it is unfavorable to long jobs, and it does not consider how important a job is.
3. Highest priority first (HPF): can be used as a job or process scheduling algorithm; when scheduling, the highest-priority job is selected from the ready queue. Because priorities are involved, it comes in preemptive and non-preemptive variants, and priority assignment likewise divides into static priority (a fixed value determined in advance by process type, the process's resource demands, user requirements, and so on) and dynamic priority (rising or falling as the process advances or waits).
4. Highest response ratio next (HRRN): response ratio = (waiting time + requested service time) / requested service time.
5. Round-robin scheduling (RR): arriving processes join the tail of the queue, and the process at the head of the queue is dispatched with a CPU time slice; when the time slice expires, a timer interrupt fires, the current process is paused and moved to the tail of the queue, and the cycle repeats (a simulation sketch follows this list).
6. Multilevel feedback queue scheduling: generally considered a good scheduling algorithm. Multiple ready queues are set up with different priorities: the first queue has the highest priority and the rest decrease in turn, and the higher a queue's priority, the shorter the time slice it is allocated. An arriving process enters the first queue FCFS; if it has not finished when its time slice expires, it is moved to the tail of the second queue to await scheduling; if it still has not finished after the second round, it is moved to the tail of the third queue, and so on. Processes in a lower queue are scheduled only when all higher-priority queues are empty.
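
A minimal simulation sketch of the round-robin algorithm from item 5, assuming hypothetical (pid, burst time) jobs and a fixed quantum:

```python
# Round-robin: run the head of the queue for up to one quantum,
# preempt and requeue it if it has not finished.
from collections import deque

def round_robin(jobs, quantum):
    """jobs: list of (pid, burst_time); returns (pid, completion_time) pairs."""
    queue = deque(jobs)
    finished, clock = [], 0
    while queue:
        pid, remaining = queue.popleft()          # dispatch the head of the queue
        run = min(quantum, remaining)
        clock += run
        if remaining - run > 0:
            queue.append((pid, remaining - run))  # preempted: back of the queue
        else:
            finished.append((pid, clock))         # record completion time
    return finished

print(round_robin([("A", 5), ("B", 3), ("C", 1)], quantum=2))
# [('C', 5), ('B', 8), ('A', 9)]
```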

Real-time scheduling algorithms:
1. Earliest deadline first (EDF): priorities are determined by task start deadlines; the earlier the deadline, the higher the priority. The algorithm maintains a real-time ready queue with the earliest-deadline task at its head, and can be used for preemptive or non-preemptive scheduling.
2. Least laxity first (LLF): laxity = (deadline - remaining run time - current time). The algorithm determines task priority by laxity, which represents the urgency of the task: the more urgent the task, the higher the priority it is assigned (a laxity-computation sketch follows this list).
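
A minimal sketch of the LLF selection rule from item 2; the task tuples and the current time are hypothetical:

```python
# laxity = deadline - remaining_run_time - current_time; lowest laxity wins.
def pick_by_llf(tasks, now):
    """tasks: list of (name, deadline, remaining_run_time)."""
    def laxity(task):
        name, deadline, remaining = task
        return deadline - remaining - now
    return min(tasks, key=laxity)

tasks = [("A", 20, 6), ("B", 15, 4), ("C", 30, 10)]
print(pick_by_llf(tasks, now=5))  # ('B', 15, 4): laxity 15-4-5 = 6, the smallest
```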

Necessary conditions for deadlock and how to handle it

A deadlock is a standstill among multiple running processes caused by contention for resources; without outside intervention, the deadlocked processes can never make progress again.

Causes of deadlock:
1. Resource contention: the number of processes requesting the same limited resource exceeds the number of available resources.
2. Illegal process advancement order: during execution, processes request and release resources in an unreasonable order, for example forming a resource waiting chain.

Necessary conditions for a deadlock to arise:
1. Mutual exclusion: a process uses the resources allocated to it exclusively.
2. Hold and wait: a process that is blocked on a resource request does not release the resources it already holds.
3. No preemption: a resource cannot be taken away from a process before the process has finished using it.
4. Circular wait: when a deadlock occurs, there exists a circular process-resource waiting chain.

Deadlock handling:
1. Deadlock prevention: break one or more of the 4 necessary conditions. This is relatively simple to implement, but if the restrictions are too strict, system resource utilization and throughput suffer.
2. Deadlock avoidance: during dynamic resource allocation, prevent the system from entering an unsafe state (a state from which deadlock may arise), as in the banker's algorithm (a sketch follows this list).
3. Deadlock detection: allow deadlocks to occur during operation; after one occurs, detect it with an algorithm, determine the resources and processes involved, and then take measures to remove it. Difficult to implement.
4. Deadlock recovery: works together with deadlock detection to free the system from deadlock (by terminating processes or preempting resources). The resources held by the processes involved in the detected deadlock are released, by termination or suspension, and allocated to blocked processes so they become ready. Difficult to implement.
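
A minimal sketch of the banker's algorithm safety check mentioned in item 2, using the classic textbook-style Available/Allocation/Need matrices (the concrete numbers are illustrative):

```python
# The system is safe if all processes can finish in some order, each using
# Available plus the resources released by already-finished processes.
def is_safe(available, allocation, need):
    work = available[:]
    finished = [False] * len(allocation)
    order = []
    while len(order) < len(allocation):
        progressed = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            # process i can run to completion if its remaining need fits in work
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                work = [w + a for w, a in zip(work, alloc)]  # it releases its resources
                finished[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return False, []  # unsafe: no remaining process can proceed
    return True, order

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))  # (True, [1, 3, 4, 0, 2])
```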

Deadlock theorem: a state S is a deadlock state if and only if the resource allocation graph of S cannot be completely reduced.

Memory management methods: paging, segmentation, and segment-page management

Because continuous memory allocation (single continuous allocation, fixed partition allocation, dynamic partition allocation, relocatable dynamic partition allocation) suffers from low memory utilization and memory fragmentation, discrete memory allocation methods were introduced. Discrete allocation can be managed from the OS's memory-management perspective, where the basic unit of allocation is the page, or from the programmer's perspective, where the basic unit of allocation is the segment.

Basic paging storage management

In basic paging storage management there is no page replacement (that is, virtual memory is not implemented), so all pages of a program must be loaded into memory before it can run. Because program data is stored across different pages, and those pages are scattered throughout memory, a page table is needed to record the mapping between logical addresses and actual storage addresses, i.e., from page numbers to physical block (frame) numbers. Because the page table itself is stored in memory, accessing data in a paging system requires two memory accesses, compared with one for storage schemes that do not use paging: the first access reads the page table from memory to find the physical block number, which is combined with the in-page offset to form the actual physical address; the second access uses that physical address to fetch the data.
To reduce the efficiency cost of the two memory accesses, paging management introduces a fast table (the TLB, also called associative registers). With a fast table, a memory access first looks up the page number in the fast table; if it is found, the page table entry is present in the fast table and the corresponding physical block number is read directly from it; if not, the in-memory page table is accessed, the physical address is obtained from the page table, and the mapping entry is added to the fast table (which may involve a fast-table replacement algorithm).
On some computers, if the logical address space is very large, a program will have a great many page table entries, and since the page table must be stored contiguously in memory, a correspondingly large block of contiguous memory is needed. To solve this problem, a two-level or multilevel page table can be used: the outer page table is brought into memory and stored contiguously, while the inner page tables may be stored discretely. Accessing the in-memory page table then requires one address translation, and obtaining the physical address corresponding to a logical address requires another, so reading one piece of data requires three memory accesses in total. (A translation sketch follows below.)
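
A minimal sketch of single-level paged address translation with a fast table (TLB), as described above; the page size, page table contents, and function names are all hypothetical:

```python
# Logical address -> (page number, offset) -> frame number -> physical address,
# consulting the fast table before the in-memory page table.
PAGE_SIZE = 4096                 # 4 KiB pages -> 12-bit in-page offset

page_table = {0: 5, 1: 9, 2: 3}  # page number -> physical block (frame) number
tlb = {}                         # the "fast table"; a real TLB is small and
                                 # needs its own replacement policy

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page in tlb:                  # TLB hit: only one memory access for the data
        frame = tlb[page]
    else:                            # TLB miss: extra access to the page table
        frame = page_table[page]     # (KeyError here would model a page fault)
        tlb[page] = frame            # install the mapping for next time
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # page 1, offset 0x234 -> frame 9 -> 0x9234
print(hex(translate(0x1238)))  # TLB hit this time
```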

Basic segmented storage management

Paging exists to improve memory utilization, while segmentation exists to satisfy the programmer's logical needs when writing code (such as data sharing, data protection, and dynamic linking).
In segmented memory management an address is two-dimensional: one dimension is the segment number, the other is the offset within the segment. Segments have different lengths, and addressing inside each segment starts from address 0. Under segmented management, memory within each segment is allocated contiguously, but the segments themselves are distributed discretely, so a mapping from logical to physical addresses is again needed; this is the segment table mechanism. Each entry in the segment table records the segment's start address in memory and the segment's length. The segment table can be placed in memory or in registers.
On a memory access, the position of the target segment's entry in the segment table is computed from the segment number and the segment table base; the segment table is then accessed to obtain the segment's physical start address, and memory is accessed using that address plus the offset within the segment. Because this again requires two memory accesses, associative registers are also introduced in segmented management.

Comparison of segmentation and paging:
1. The page is a physical unit of information, a discrete allocation mechanism proposed from the standpoint of system memory utilization; the segment is a logical unit of information, each segment containing a group of related, meaningful information, a memory management mechanism proposed from the user's standpoint.
2. Page size is fixed and determined by the system; segment size is not fixed and is determined by the user.
3. The page address space is one-dimensional, while the segment address space is two-dimensional.

Segment-page storage management

The user program is first divided into several segments, each segment is then divided into pages, and each segment is given a segment name. Under segment-page management, a memory address consists of three parts: the segment number, the page number within the segment, and the offset within the page.
Segment-page memory access: the system sets up a segment table register that stores the segment table's start address and length. During address translation, using the given segment number (which is first compared against the segment table length in the register to prevent out-of-bounds access) and the segment table start address in the register, the segment's entry in the segment table is located; from that entry, the start address of the segment's page table is obtained; the page number from the logical address is then used to find the page table entry; the physical address is formed by concatenating the physical block address in the page table entry with the in-page offset from the logical address; finally, that physical address is used to access the required data. Because accessing one piece of data requires three memory accesses, associative registers are also introduced in segment-page management. (A translation sketch follows below.)
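
A minimal sketch of segment-page address translation, with the bounds checks described above; all table contents and sizes are hypothetical:

```python
# segment number -> page table; page number -> frame; frame + offset -> address.
PAGE_SIZE = 1024  # 10-bit in-page offset

# segment table: segment number -> (page table, segment length in pages)
segment_table = {
    0: ({0: 8, 1: 2}, 2),  # e.g., a code segment of 2 pages
    1: ({0: 5}, 1),        # e.g., a data segment of 1 page
}

def translate(seg, page, offset):
    if seg not in segment_table:                # check against segment table length
        raise MemoryError("segment number out of bounds")
    page_table, length = segment_table[seg]     # first access: segment table
    if page >= length:
        raise MemoryError("page number exceeds segment length")
    frame = page_table[page]                    # second access: page table
    return frame * PAGE_SIZE + offset           # third access fetches the data

print(translate(seg=0, page=1, offset=100))  # frame 2 -> 2*1024 + 100 = 2148
```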

Virtual memory and page replacement algorithms

If a program requires more memory than the computer can actually provide, it cannot run, because it cannot be loaded into memory. Simply adding physical memory solves only part of the problem: some single programs still cannot be loaded, or several programs cannot be loaded at the same time. Both problems, however, can be solved by expanding memory capacity from a logical point of view.

Virtual memory is a memory system with demand (request) loading and replacement functions, whose capacity can be expanded logically. Virtual memory is built on top of discrete memory management.

Characteristics of virtual memory:
1. Multiplicity: a job can be loaded into memory in multiple batches. Multiplicity is unique to virtual storage.
2. Swappability: a job can be swapped in and out while it runs (temporarily unused data is swapped out in exchange for data that is currently needed).
3. Virtuality: virtuality manifests as the logical expansion of memory capacity (applications whose actual memory requirements exceed physical memory can still run). Virtuality is the most important characteristic of virtual memory and its ultimate goal. Virtuality is built on multiplicity and swappability, and multiplicity and swappability are in turn built on discrete allocation.

Page replacement algorithms
    1. Optimal replacement (OPT): a purely theoretical algorithm used as a benchmark to evaluate other page replacement algorithms. Its strategy is to replace the page in memory that will not be accessed for the longest time in the future.
    2. First-in, first-out (FIFO) replacement: a simple, crude algorithm that takes no account of how pages are actually accessed. Each time, the page that was paged in earliest is evicted.
    3. Least recently used (LRU): each page gets an access field recording the time t elapsed since the page was last accessed; at each replacement, the page with the largest t is replaced (implementable with registers or a stack).
    4. Clock algorithm (also called not recently used, NRU): each page gets one access bit, and the pages are linked into a circular queue; when a page is accessed, its access bit is set to 1. At replacement time, if the page the pointer currently points to has access bit 0, it is replaced; otherwise the bit is cleared to 0 and the pointer advances, looping until a page with access bit 0 is found (a sketch follows this list).
    5. Improved clock algorithm: adds a modified bit on top of the clock algorithm and judges candidates by the access bit and the modified bit together. Pages with access bit 0 and modified bit 0 are replaced first, followed by pages with access bit 0 and modified bit 1.
    6. Least frequently used (LFU): a shift register records how often each page is accessed, and at each replacement the page with the lowest current access count is replaced. The problem is that the register does not truly reflect current page access frequency: because memory is accessed quickly, a page accessed once and a page accessed 100 times within one register-update interval look the same. LFU and LRU are very similar and rely on the same hardware support, but the key difference is that one is based on time and the other on frequency (for example, with shift registers PA = 001111 and PB = 111000 for two pages, LRU would evict PA while LFU would evict PB).
    7. Page buffering algorithm (PBA): at replacement time, a page, whether modified or not, is not written back to disk immediately; instead it is kept in an in-memory page list (a modified-page list and an unmodified-page list, or a single undifferentiated list). If the page is accessed again, it can be reclaimed from these lists directly without disk I/O; a disk write happens only after the modified-page list reaches a certain length (effectively merging many I/Os into one).
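
A minimal sketch of the clock algorithm from item 4: frames form a ring, a hit sets the access bit, and on a miss the hand clears 1-bits until it finds a 0-bit victim. Class and method names are illustrative.

```python
# Clock (NRU) page replacement over a fixed set of frames.
class Clock:
    def __init__(self, nframes):
        self.frames = [None] * nframes  # page held by each frame
        self.refbit = [0] * nframes     # access bit per frame
        self.hand = 0

    def access(self, page):
        if page in self.frames:         # hit: just set the access bit
            self.refbit[self.frames.index(page)] = 1
            return None
        while True:                     # miss: sweep the ring for a victim
            if self.refbit[self.hand] == 0:
                victim = self.frames[self.hand]
                self.frames[self.hand] = page
                self.refbit[self.hand] = 1
                self.hand = (self.hand + 1) % len(self.frames)
                return victim           # the replaced page (or None if frame was free)
            self.refbit[self.hand] = 0  # second chance: clear the bit, move on
            self.hand = (self.hand + 1) % len(self.frames)

clock = Clock(3)
for p in [1, 2, 3, 1, 4]:
    print(f"access {p}: evicted {clock.access(p)}")
```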

Summary of "operating system" key points of knowledge
