Notes on modern operating systems

A summary of some concepts from modern operating systems.

1. Process context:

A static description of the entire process.

It consists of the process's user address space contents, the hardware register contents, and the kernel data structures related to the process.

User-level context: the process's user address space, i.e. the user text segment, user data segment, and user stack.

Register-level context: program counter, program status register, stack pointer, and general-register values.

System-level context: a static part (the PCB and resource tables) and a dynamic part, the kernel stack (the stack used when the process executes kernel code; different processes have different kernel stacks even when calling the same kernel routine).
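For concreteness, a minimal sketch of these three context levels as Python data classes; the field names and types are illustrative only, not any real kernel's layout.

```python
from dataclasses import dataclass, field

@dataclass
class RegisterContext:
    """Register-level context."""
    program_counter: int = 0
    status_register: int = 0
    stack_pointer: int = 0
    general_registers: list = field(default_factory=lambda: [0] * 8)

@dataclass
class ProcessContext:
    # User-level context: text segment, data segment, user stack.
    text_segment: bytes = b""
    data_segment: bytearray = field(default_factory=bytearray)
    user_stack: list = field(default_factory=list)
    # Register-level context.
    registers: RegisterContext = field(default_factory=RegisterContext)
    # System-level context, static part: PCB fields and resource tables.
    pcb: dict = field(default_factory=dict)
    resource_tables: dict = field(default_factory=dict)
    # System-level context, dynamic part: the per-process kernel stack.
    kernel_stack: list = field(default_factory=list)
```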

2. Three methods for constructing a server:

Threads within a process cooperate rather than compete: a thread yields the CPU when doing so benefits the application; after all, the code is usually written by the same programmer.

Multithreading: parallelism, blocking system calls

Single-threaded process: no parallelism, blocking system calls

Finite-state machine: parallelism, non-blocking system calls, interrupts

3. Principles for using critical sections (mutual exclusion):

No two processes may be inside their critical sections at the same time.

No assumptions may be made about CPU speed or the number of CPUs.

A process running outside its critical section must not block other processes.

No process should have to wait indefinitely to enter its critical section.

4. Busy waiting: before it gains access to the critical section, the process keeps testing in a loop and does nothing else.

5. Primitive: a routine that performs a specific function and whose execution must be continuous, i.e. it cannot be interrupted partway through. Primitives can be implemented by masking interrupts or with an atomic test-and-set instruction.
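A minimal sketch of busy waiting on a test-and-set lock, assuming Python with the standard `threading` module. Real hardware provides test-and-set as a single atomic instruction; here its atomicity is only emulated with a small internal `threading.Lock` so the example can actually run.

```python
import threading

class SpinLock:
    """Busy-waiting lock built on test-and-set (TSL)."""

    def __init__(self):
        self._flag = False                  # False = unlocked, True = locked
        self._tsl_guard = threading.Lock()  # stands in for hardware atomicity

    def _test_and_set(self):
        # Atomically read the old value of the flag and set it to True.
        with self._tsl_guard:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        # Busy waiting: keep testing until TSL reports "was unlocked".
        while self._test_and_set():
            pass                            # spin, doing no useful work

    def release(self):
        with self._tsl_guard:
            self._flag = False

# Usage: two threads increment a shared counter inside the critical section.
counter = 0
lock = SpinLock()

def worker():
    global counter
    for _ in range(10_000):
        lock.acquire()
        counter += 1                        # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                              # expected: 20000
```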

6. Monitor: a special named module consisting of the data structures that describe a shared resource and the set of procedures that operate on them; at most one process may be active inside the monitor at a time.
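A sketch of the monitor idea, assuming Python's `threading.Condition`: the shared data (here a bounded buffer) and the only procedures allowed to touch it live in one named module, and a single lock keeps at most one thread active inside the monitor at a time. The class and method names are illustrative.

```python
import threading
from collections import deque

class BoundedBufferMonitor:
    """Monitor: shared data plus the procedures that operate on it,
    all executed under one mutual-exclusion lock."""

    def __init__(self, capacity=8):
        self._buf = deque()                        # the shared data structure
        self._capacity = capacity
        self._lock = threading.Lock()              # one thread inside at a time
        self._not_full = threading.Condition(self._lock)
        self._not_empty = threading.Condition(self._lock)

    def deposit(self, item):
        with self._not_full:                       # enter the monitor
            while len(self._buf) >= self._capacity:
                self._not_full.wait()              # leave the monitor and sleep
            self._buf.append(item)
            self._not_empty.notify()               # wake one waiting consumer

    def remove(self):
        with self._not_empty:
            while not self._buf:
                self._not_empty.wait()
            item = self._buf.popleft()
            self._not_full.notify()
            return item
```

Producer threads call deposit() and consumer threads call remove(); each while-loop re-checks its condition after every wakeup, which is the usual monitor discipline.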

7. Multi-level feedback queue scheduling algorithm:

Set up multiple ready queues and assign a different time slice to the processes in each queue. The first-level queue has the highest priority and the smallest time slice; as queue priority drops, the time slice grows.

Scheduling starts from the first-level queue; only when it is empty does scheduling move to the second-level queue, and so on. Each queue is scheduled FIFO plus time slice, and the last level uses round robin (RR). A newly ready process enters the first-level queue. When a process gives up the CPU because it blocks, it enters the corresponding wait queue; once the awaited event occurs, it returns to its original ready queue. When a process uses up its time slice and gives up the CPU, it is demoted to the next-level queue. When a process in a higher-priority queue becomes ready, it can preempt the CPU, and the preempted process returns to the tail of its original ready queue.
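A toy simulation of the scheme just described, assuming a hypothetical `mlfq` helper with made-up time slices; preemption by newly arriving higher-priority processes is not modeled, only demotion after a used-up slice and round robin at the bottom level.

```python
from collections import deque

def mlfq(processes, time_slices=(2, 4, 8)):
    """processes maps name -> remaining CPU burst. Queue 0 has the highest
    priority and the smallest slice; a process that uses up its slice is
    demoted one level; the bottom level behaves as round robin."""
    queues = [deque() for _ in time_slices]
    for name, burst in processes.items():
        queues[0].append((name, burst))            # new processes enter queue 0
    schedule = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty
        name, remaining = queues[level].popleft()
        run = min(time_slices[level], remaining)
        schedule.append((name, level, run))
        remaining -= run
        if remaining > 0:                          # slice used up: demote (or RR at bottom)
            queues[min(level + 1, len(queues) - 1)].append((name, remaining))
    return schedule

print(mlfq({"A": 5, "B": 3, "C": 10}))
# e.g. [('A', 0, 2), ('B', 0, 2), ('C', 0, 2), ('A', 1, 3), ('B', 1, 1), ...]
```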

8. Windows thread scheduling: the scheduling unit is the thread. Scheduling is dynamic-priority-based and preemptive, combined with time-quantum adjustment; thread scheduling differs between single-processor and multi-processor Windows systems.

Ready threads enter the queue corresponding to their priority. The system always selects the highest-priority ready thread to run, and threads of the same priority are scheduled round robin by time slice; in a multi-processor system several threads may run in parallel. Scheduling is triggered by: a voluntary switch; preemption (even a thread running in kernel mode can be preempted by a higher-priority thread running in user mode; a preempted thread is put back at the head of the ready queue for its priority); and the time quantum running out.

9. Conditions for deadlock: 1> Mutual exclusion (exclusive resources): a resource can be used by only one process at a time. 2> Hold and wait (partial allocation): a process keeps the resources it already holds while requesting new ones. 3> No preemption: a resource cannot be forcibly taken away from the process that holds it; it can only be released voluntarily by its holder. 4> Circular wait: there exists a circular chain of processes in which each process waits for a resource held by the next.

10. Measures to prevent deadlock (illustrated with the dining philosophers problem):

Allow at most four philosophers to sit at the table at the same time.

Allow a philosopher to pick up chopsticks only when both chopsticks next to them are available.

Number the philosophers. Odd-numbered philosophers pick up the left chopstick first; even-numbered philosophers pick up the right chopstick first (a sketch of this scheme follows the list).
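A runnable sketch of the third measure, assuming five philosophers and Python locks as chopsticks: odd-numbered philosophers reach for the left chopstick first, even-numbered ones for the right, which breaks the circular wait.

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]

def philosopher(i, meals=3):
    left, right = chopsticks[i], chopsticks[(i + 1) % N]
    # Odd philosophers take the left chopstick first, even ones the right,
    # so two neighbours always compete for the same chopstick first and
    # no circular wait can form.
    first, second = (left, right) if i % 2 == 1 else (right, left)
    for _ in range(meals):
        with first:
            with second:
                pass                    # eating
        # thinking

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("all philosophers finished without deadlock")
```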

11. Privileged and non-privileged instructions:

Privileged instructions: instructions that only the operating system may use. Why? For protection: a computer that uses multiprogramming must divide its instruction set into privileged and non-privileged instructions. Executing a privileged operation usually triggers a processor mode switch: a special mechanism switches the processor into the operating system's privileged state (kernel mode) and then hands control to a specific piece of operating-system code. This process is called a trap.

Kernel mode: the state in which the operating system's management code runs; it has the higher privilege level and is also called the privileged state, core state, or system state. User mode: the state in which user code runs; it has the lower privilege level and is also called the ordinary state or user state.

User mode → kernel mode: the only way is through an interrupt or exception (a trap). Kernel mode → user mode: by setting the PSW (modifying the program status word).

12. Interrupts and exceptions: the operating system can be regarded as "interrupt (exception) driven" or "event-driven".

Definition of an interrupt (exception): the CPU's reaction to an event in the system. The CPU stops the program it is executing, saves its context, and automatically transfers to the corresponding event-handling routine; when handling is complete, it returns to the breakpoint and continues the interrupted program. Interrupt (external interrupt): unrelated to the instruction being executed; it can be masked. Exception (internal interrupt): caused by the instruction being executed; it cannot be masked.

13. System call: a user program calls a sub-function provided by the operating system; it is a special kind of procedure call, implemented by a special machine instruction. System calls are the only interface the operating system offers to programmers. A system call switches the system from user mode to kernel mode. System calls are used to dynamically request and release system resources, to perform hardware-related work, and to control program execution.
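As a small illustration, the functions in Python's `os` module are thin wrappers around the corresponding system calls; each call below crosses from user mode into kernel mode and back. The file name is arbitrary.

```python
import os

# open, write and close issued through their Python wrappers;
# each one traps into the kernel and returns to user mode when done.
fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"written via system calls\n")
os.close(fd)
```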

14. File: an abstraction mechanism; a named, logically complete sequence of information items.

File system: the operating-system software that manages information resources in a unified way. It manages file storage, retrieval, and update, provides safe and reliable sharing and protection, and is convenient for users.

Logical structure of a file: 1> Stream files: the basic unit is the character; a file is a logically meaningful but unstructured string of characters. Benefit: great flexibility. 2> Record files: a file consists of a number of records and can be read, written, and searched record by record.

15. File control block (FCB): the data structure the operating system sets up to manage files; it stores all the information needed to manage a file (the file's attributes, or metadata).

File directory: the FCBs of all files, organized together, form the file directory (an ordered collection of file control blocks). Cluster: one or more (a power of two) consecutive sectors, the addressable unit of data blocks.

File volume: a logical partition of the disk, made up of one or more clusters. Within one file volume, the same management data is used for file allocation and free-space management; different file volumes use independent management data.

16. Opening a file: given the file's path, obtain a file handle (file descriptor) and read the file's directory entry into memory. ① Search the directory by path name and locate the FCB (or i-node number). ② Using the file number, check the system open file table to see whether the file is already open. ③ Check access validity against the open mode, the sharing specification, and the user's identity. ④ Take a free entry in the process's open file table, fill in the open mode, and point it at the corresponding entry in the system open file table.
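A toy sketch of the four steps above, with the directory, the system open file table, and the per-process open file table modeled as plain dictionaries; every name here is illustrative, not any real kernel's structure.

```python
directory = {"/home/a/report.txt": {"inode": 42, "perm": "rw"}}  # path -> FCB
system_open_files = {}    # inode -> {"fcb": ..., "count": ...}
process_open_files = {}   # fd -> {"mode": ..., "sys_entry": inode}
next_fd = 3               # 0, 1, 2 are taken by stdin/stdout/stderr

def open_file(path, mode):
    global next_fd
    fcb = directory.get(path)                      # 1. path -> FCB / i-node
    if fcb is None:
        raise FileNotFoundError(path)
    inode = fcb["inode"]
    entry = system_open_files.get(inode)           # 2. already open system-wide?
    if entry is None:
        entry = system_open_files[inode] = {"fcb": fcb, "count": 0}
    if mode not in fcb["perm"]:                    # 3. access validity check
        raise PermissionError(mode)
    fd = next_fd                                   # 4. new per-process entry
    next_fd += 1
    process_open_files[fd] = {"mode": mode, "sys_entry": inode}
    entry["count"] += 1
    return fd                                      # the "file handle"

print(open_file("/home/a/report.txt", "r"))        # e.g. 3
```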

17. File system consistency: how the problem arises: a disk block is read into memory, modified, and written back; if the system crashes before the write-back, the file system becomes inconsistent. Solution: provide a utility that runs when the system restarts and checks disk-block and directory consistency.

18. File system write methods:

(1) Write-through: changes in memory are written to disk immediately. Disadvantage: poor speed and performance. Example: the FAT file system. (2) Delayed write (lazy write): a write-back cache is used to gain speed. Disadvantage: poorer recoverability. (3) Recoverable write (transaction logging): data is written to the file system through transaction logs; both safety and speed are taken into account. Example: NTFS.

19. File system performance: directory-entry decomposition, the current directory, memory-mapped files.

1) Block cache: the system keeps some blocks in memory (the block cache) even though logically they belong to the disk (a small LRU block-cache sketch follows this list).

2) Read-ahead: on each disk access, read a few extra disk blocks. Because program execution shows spatial locality, the extra cost is low and the prefetched blocks are likely to be used.

3) Reasonable disk-space allocation: when allocating blocks, place blocks likely to be accessed sequentially close together, preferably on the same cylinder, to reduce disk-arm movement.

4) Optimal information layout: the arrangement of records on a track also affects I/O time.

5) Record grouping and unpacking: merge several logical records into one group and store them in a single block.

6) Memory-mapped files.

7) RAID (Redundant Array of Independent Disks): uses multiple disks in parallel for extra performance. 1. Organizing several disks into one logical volume provides cross-disk capacity. 2. Splitting data into stripes and reading/writing several disks in parallel improves the data transfer rate. 3. Mirroring or parity provides fault tolerance (redundancy).

8) Disk scheduling.
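A minimal LRU block-cache sketch for item 1, assuming a hypothetical `read_block_from_disk` callable standing in for the real driver; the capacity and block contents are made up.

```python
from collections import OrderedDict

class BlockCache:
    """Keep recently used disk blocks in memory; evict the least recently used."""

    def __init__(self, capacity, read_block_from_disk):
        self.capacity = capacity
        self.read_block_from_disk = read_block_from_disk
        self.cache = OrderedDict()                  # block number -> block data

    def read(self, block_no):
        if block_no in self.cache:
            self.cache.move_to_end(block_no)        # hit: mark most recently used
            return self.cache[block_no]
        data = self.read_block_from_disk(block_no)  # miss: go to the disk
        self.cache[block_no] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)          # evict least recently used
        return data

cache = BlockCache(2, lambda n: f"<block {n}>")
print(cache.read(7), cache.read(7))                 # second read is served from memory
```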

20. Disk scheduling:

1) First-come, first-served (FCFS): requests are serviced in arrival order. Advantages: simple and fair. Disadvantages: low efficiency; two adjacent requests may require a seek from the innermost to the outermost cylinder, so the head moves back and forth repeatedly, increasing service time and wearing the mechanism.

2) Shortest seek time first (SSTF): service the pending request closest to the current head position. Advantage: improves the average disk service time. Disadvantage: some requests may be starved for a long time. (A small sketch of SSTF and SCAN follows this list.)

3) SCAN (the elevator algorithm). 4) C-SCAN: always scan in one direction; after the last track in that direction has been serviced, the arm returns to the opposite end of the disk and scanning starts again. This reduces the maximum delay experienced by new requests.

5) FSCAN: use two sub-queues to overcome "arm stickiness".

6) Rotational scheduling: order requests on the same cylinder by rotational delay.
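A small sketch of SSTF and SCAN as pure functions over a pending request queue; the starting head position and cylinder numbers are made-up example data.

```python
def sstf(start, requests):
    """Shortest seek time first: always serve the pending request whose
    cylinder is closest to the current head position."""
    pending, order, head = list(requests), [], start
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

def scan(start, requests, direction=+1):
    """SCAN (elevator): serve requests in the current direction until none
    remain on that side, then reverse and serve the rest."""
    up = sorted(c for c in requests if c >= start)
    down = sorted((c for c in requests if c < start), reverse=True)
    return up + down if direction > 0 else down + up

head = 53
queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(sstf(head, queue))   # [65, 67, 37, 14, 98, 122, 124, 183]
print(scan(head, queue))   # [65, 67, 98, 122, 124, 183, 37, 14]
```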

21. Address protection: ensure that each process has an independent address space, determine the range of valid addresses the process may access, and ensure the process accesses only its valid addresses. Address relocation: so that the CPU accesses the correct memory cells when executing instructions, the logical addresses in a program must be converted at run time into physical addresses the machine can address directly; this conversion is called address relocation.
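A one-function sketch of relocation and protection with a base/limit register pair; the numbers are arbitrary examples.

```python
def translate(logical_addr, base, limit):
    """Check the logical address against the limit register (protection),
    then add the base register to form the physical address (relocation)."""
    if not 0 <= logical_addr < limit:
        raise MemoryError(f"address {logical_addr} outside the process's space")
    return base + logical_addr

print(translate(100, base=30000, limit=12000))       # -> 30100
# translate(20000, base=30000, limit=12000) would raise MemoryError
```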

22. Basic memory management schemes:

1) Variable partitioning: according to the needs of the process, carve a partition of the required size out of the allocatable memory and assign it to the process.

2) Paging: the user program's address space is divided into equal-sized pieces called pages, and memory is divided into pieces of the same size called memory blocks (physical pages, page frames). Memory is allocated by page, and logically adjacent pages need not be physically adjacent (a paging address-translation sketch follows this list).

3) Segmentation: the user program's address space is divided into segments according to the program's own logical structure, and memory is dynamically divided into regions of different lengths (variable partitions). Memory is allocated by segment; each segment occupies contiguous memory, but segments need not be contiguous with one another.

4) Segmentation with paging: the user address space is segmented, memory is paged, and the allocation unit is the page.
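A minimal sketch of paged address translation for item 2: split the logical address into a page number and an offset, look the page up in a toy page table, and recombine with the frame number. The page size and table contents are made up.

```python
PAGE_SIZE = 4096                       # bytes per page / page frame

page_table = {0: 5, 1: 9, 2: 3}        # toy page table: page number -> frame number

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]           # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))          # page 1, offset 0x234 -> frame 9 -> 0x9234
```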

23. Swapping: when memory is insufficient, the system temporarily moves some processes from memory to external storage and swaps other processes from external storage into the space they vacated (dynamic scheduling of processes between memory and external storage).

24. Page replacement algorithms:

OPT: replace the page that will never be needed again, or will not be used for the longest time in the future.

FIFO: replace the page that has been resident in memory the longest. Implementation: a linked list of pages.

Second chance (SCR): select a page as FIFO would, then check its reference bit R. If R is 0, replace the page; if R is 1, give the page a second chance and clear R to 0.

Clock algorithm: prefer pages that do not need to be written back to disk, which saves time.

Not recently used (NRU): replace a page that has not been used in the recent period. Each page-table entry keeps a reference bit (R) and a modified bit (M).

Least recently used (LRU): replace the page whose last access lies furthest in the past, i.e. the page unused for the longest time. Implementation: timestamps, or a stack of accessed pages; high overhead.

Aging algorithm: an approximation of LRU. At each tick, every page's counter is first shifted right one bit, and the R bit is then added at the leftmost position.
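A small sketch of the second-chance (clock) selection step: sweep the circular list of frames, give any frame with R = 1 a second chance by clearing R, and take the first frame with R = 0 as the victim. The frame contents and R bits are made-up example data.

```python
def clock_replace(ref_bits, hand):
    """Return (victim_index, new_hand_position) for a circular frame list."""
    n = len(ref_bits)
    while True:
        if ref_bits[hand] == 0:
            return hand, (hand + 1) % n   # victim found
        ref_bits[hand] = 0                # second chance: clear R and move on
        hand = (hand + 1) % n

frames   = ["A", "B", "C", "D"]
ref_bits = [1, 0, 1, 1]                   # R bits, set by the hardware on access
victim, hand = clock_replace(ref_bits, hand=0)
print(frames[victim])                     # "B": the first frame found with R == 0
```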

25. Thrashing: in a virtual memory system, pages are moved frequently between memory and external storage. If paging takes more time than the process's actual running time, system efficiency drops sharply; this phenomenon is called thrashing (jitter).

26. Working set: idea: by the principle of locality, a process tends to concentrate its accesses on a subset of its pages over any given period; these pages are called its active pages. If the number of physical page frames allocated to a process is too small for all of its active pages to fit in memory, the process will page-fault frequently while running.
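A tiny sketch of the working set W(t, Δ): the set of distinct pages referenced in the window of the last Δ references ending at time t. The reference string is made-up example data.

```python
def working_set(reference_string, t, delta):
    """Distinct pages referenced in the last `delta` references ending at time t."""
    window = reference_string[max(0, t - delta + 1): t + 1]
    return set(window)

refs = [1, 2, 3, 2, 1, 4, 4, 4, 3, 4, 2]
print(working_set(refs, t=7, delta=4))    # references at times 4..7 -> {1, 4}
```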

27. Cleaning policy: a paging daemon is provided. It sleeps most of the time and is woken periodically to inspect the state of memory. If a page frame that was reclaimed earlier is needed again and has not yet been overwritten, it can be removed from the free-frame pool and the page recovered. A two-pointer clock can implement the cleaning policy: the front pointer is controlled by the paging daemon; when it points to a dirty page, the page is written back to disk and the pointer advances, and when it points to a clean page, the pointer simply advances. The back pointer is used for page replacement, just as in the standard clock algorithm.

28. Page buffering: do not discard replaced pages; place each one in one of two lists: if unmodified, on the free-page list; if modified, on the modified-page list. Modified pages are written back to disk in clusters, and a page awaiting replacement is still in memory and can be recovered quickly.

29. Memory-mapped files: a process maps a file into part of its virtual address space via a system call; accessing the file then looks like accessing a large array in memory rather than reading and writing the file. In most implementations, no page content is actually read when the mapping is established; instead, pages are read in one at a time as they are first accessed, with the disk file serving as backing store. When the process exits or explicitly unmaps the file, all modified pages are written back to the file.
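A runnable example with Python's `mmap` module: the file is mapped into the address space, read and modified like a byte array, and the modified pages are flushed back to the file. The file name is arbitrary.

```python
import mmap

path = "mapped_demo.bin"
with open(path, "wb") as f:
    f.write(b"hello memory mapped file")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:   # map the whole file
        print(mm[:5])                      # b'hello' -- read through the mapping
        mm[:5] = b"HELLO"                  # write through the mapping
        mm.flush()                         # push modified pages back to disk

print(open(path, "rb").read())             # b'HELLO memory mapped file'
```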

30. Components of the Windows memory manager: working-set manager; process/stack swapper; modified-page writer; mapped-page writer; dereference-segment thread; zero-page thread.

31. Windows working sets: a working set is the subset of virtual pages that is resident in physical memory.

Process working set: the page frames allocated to a particular process.

System working set: the page frames allocated to pageable system code and data.
