Operating System Learning: OS Introduction and Memory Management

Source: Internet
Author: User

!: What is an operating system?
The operating system has no complete, precise, generally accepted definition, because it is a complex piece of system software whose outer edges or boundaries are not well defined. It can, however, be characterized by its functions, which face two directions: outward and inward. Outward, toward applications: the OS is control software that manages applications (their startup, interruption, suspension, killing, and so on) and provides them with a variety of services, such as network-card services, sound-card services, and I/O services. Inward, toward hardware resources: the OS is resource-management software that coordinates the allocation of resources among applications, including CPU resources, memory resources, and the various peripherals.
!: Software classification?
Software can be divided into application software and system software. Common application software includes office suites and audio/video entertainment software. System software comes in two kinds: tool-type system software, such as compilers and dynamic libraries, and operating-system software, such as the kernel and the shell. (Digression: the kernel mainly manages the computer's hardware resources. Which three managements? CPU resource management, memory resource management, and disk resource management. At the operating-system level these three are abstracted as process management, address-space management, and file management, respectively.)
!: What are the characteristics of the operating system (kernel)?
Concurrency (contrast with parallelism: concurrency means multiple processes make progress over a period of time, while parallelism means multiple processes are running at the same instant; typically, a single CPU core gives concurrency, and execution of processes across multiple CPU cores gives parallelism)
Sharing (multiple programs seem to access hardware resources "at the same time", but for mutually exclusive resources the sharing is actually interleaved: at any instant only one process may access the resource; memory access is an example: memory as a whole is accessed in parallel, but a single smallest-granularity memory unit is accessed mutually exclusively)
Virtualization (multiprogramming techniques make each user (process/program) feel as if the computer were serving it alone)
Asynchrony (a program's execution is not continuous but stop-and-go, and its rate of progress is unpredictable; yet as long as the program runs in the same operating-system environment, the OS must guarantee that it produces the same result)
!: What should one pay attention to when learning operating systems?
Many current textbooks and courses are somewhat dated. Much of what operating systems once had to manage carefully now barely needs consideration, such as certain details of process scheduling and disk I/O management. Textbook knowledge lags behind today's hardware, so an issue the textbook treats as very important may no longer matter much now.
!: A bit of chicken soup ~
I hear and I forget.
I see and I remember.
I do and I understand.
!: Operating system families?
UNIX family: macOS, etc.
Linux family: Ubuntu, Fedora, Android, etc.
Windows family: Windows 10, Windows Phone, etc.
Other operating systems: industrial and automation systems, mainframe operating systems, etc.
!: Startup of the operating system?
When you press the power button, the following sequence of actions is performed:
The BIOS (Basic Input/Output System) runs its power-on self-test (checking whether the various hardware devices work properly, e.g., whether the graphics card is present)
After the devices check out, the BIOS loads the bootloader (which is responsible for loading the OS) into memory
The bootloader loads the OS from the hard disk into memory and hands control of the CPU over to the OS
The system boots
System boot
!: What is a system call?
A system call is a service request issued by an application to the operating system; that is, an operation instruction the application issues to the OS. (The operating system encapsulates access to the underlying hardware and exposes interfaces for it; those APIs are the system-call APIs. Often there is a friendlier API layered on top of the system-call APIs, and applications usually make system calls by going through this higher-level API, such as the Win32 API for Windows or the POSIX API for POSIX-based systems, including all versions of Unix, Linux, and Mac OS X.)
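The layering described above can be sketched in Python, where `print()` is the higher-level API and `os.write` is a thin wrapper over the `write(2)` system call (the function name here is an illustrative assumption, not from the original notes):

```python
import os

def greet_via_syscall(msg: str) -> int:
    """Write msg plus a newline straight through the write(2) wrapper.

    os.write talks to the kernel via the write system call; print() is
    a higher-level API that ultimately reaches the same kernel service.
    Returns the number of bytes written, as write(2) does.
    """
    data = (msg + "\n").encode()
    return os.write(1, data)   # file descriptor 1 is standard output

n = greet_via_syscall("hello from write(2)")
```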
!: What is an exception?
An exception arises when an unexpected instruction from the application causes an error, such as an illegal instruction or another bad processing state (e.g., a memory error), which must then be handled by the operating system.
!: What is an interrupt?
An interrupt (originating from a peripheral; the terminology is open to discussion) occurs when the CPU must stop executing the current program in order to execute or handle a new situation.
!: Why are applications designed without direct access to hardware?
First, for security: to the computer hardware, the operating system can be trusted, while third-party applications cannot; such applications might maliciously damage the computer's hardware and software systems.
Second, for ease of application development: giving applications direct access to hardware would add extra development and maintenance costs. Building on top of the operating system instead significantly reduces development cost, because the OS already encapsulates the underlying hardware well; the application simply issues system calls to request operating-system services.
!: How does the operating system handle interrupts and exceptions?
The operating system maintains a table for handling interrupts and exceptions. Each interrupt and exception is assigned a number; the table's keys are these numbers, and the values are the starting addresses of the service routines that handle them. When an interrupt occurs, the OS looks up the table directly and the corresponding interrupt handler processes it.
The interrupt process in detail (the handling of an interrupt is transparent to the user):
* Set the interrupt flag: set the flag for the internal or external event and obtain the interrupt ID
* Save the current processing state: the program's execution progress (the address of the next instruction) and the relevant register contents
* Look up the table by interrupt ID, find the interrupt handler, and handle the interrupt
* Clear the interrupt flag
* Restore the previously saved processing state and continue executing the program
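The table-driven dispatch in the steps above can be sketched as a toy model (not real hardware; the interrupt numbers, handler names, and context format are all illustrative assumptions):

```python
saved_context = []  # stack of saved (next_instruction, registers) states

def timer_handler():
    return "timer tick handled"

def keyboard_handler():
    return "key press handled"

# The "vector table": interrupt number -> handler start address (here, a
# function object). The numbers are made up for illustration.
INTERRUPT_TABLE = {
    0x20: timer_handler,
    0x21: keyboard_handler,
}

def raise_interrupt(irq: int, next_instruction: int, registers: dict) -> str:
    saved_context.append((next_instruction, dict(registers)))  # save state
    result = INTERRUPT_TABLE[irq]()                            # table lookup + handle
    saved_context.pop()                                        # restore state
    return result

outcome = raise_interrupt(0x20, next_instruction=0x4004, registers={"eax": 1})
```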
Exceptions are handled as follows:
* Obtain the exception ID
* Save the current processing state: the program's execution progress (i.e., the next instruction address) and the relevant register contents
* Handle the exception: either kill the faulting program or re-execute the instruction that just faulted
* Restore the saved state and continue executing the interrupted program
!: Operating system software structure?
Kernel → system calls → high-level APIs
System calls encapsulate kernel services; the high-level APIs encapsulate the system-call APIs
In general, applications make system calls through the high-level APIs, but they can also use the system-call API directly, or even access the kernel directly; only a small fraction of applications take the latter two routes
!: What are user mode and kernel mode?
User mode: the state in which an application controls CPU execution. In this state its privileges are relatively low: for example, it cannot directly access I/O, and it cannot execute certain privileged instructions, such as suspending a process.
Kernel mode: the state in which the operating-system kernel controls CPU execution. In this state the CPU can execute any instruction, because it holds all execution rights.
An application can switch the CPU from user mode to kernel mode through a system call.
Within an application, a function call performs better than a system call: a function call runs on the program's current stack, while a system call must switch to the kernel stack and also change privilege level, converting from user mode to kernel mode. Switching stacks and privilege levels is costly, so a system call performs somewhat worse than an ordinary function call.
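The cost difference above can be probed with a rough micro-benchmark, comparing a plain in-process function call with `os.getpid`, which enters the kernel. This is a sketch, not a rigorous benchmark; absolute numbers vary by machine:

```python
import os
import timeit

def plain_function():
    """A trivial function call that never leaves user mode."""
    return 42

# Time 100k plain calls vs 100k calls that each issue a system call.
t_func = timeit.timeit(plain_function, number=100_000)
t_syscall = timeit.timeit(os.getpid, number=100_000)

print(f"function call: {t_func:.4f}s, system call: {t_syscall:.4f}s")
```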
!: The memory hierarchy?
Register :: L1 cache :: L2 cache :: L3 cache :: Memory :: Disk
From front to back, access speed decreases in turn
From front to back, cost per unit of storage decreases in turn
!: Address space?
Physical address space: faces the hardware directly, i.e., the address space the hardware can support, such as the capacity of the installed memory modules
Logical address space: the range of addresses owned by a running program; from the program's point of view, a one-dimensional linear address space
!: How are logical addresses generated?
(In a C program, a variable reference is a reference to an address; in assembly, the address is a symbolic label that is human-readable but closer to machine language; in machine language, the address is the actual logical address, with no variable names in sight.)
For a C program, the logical addresses are generated in the executable once it has been compiled into an executable file.
These logical addresses live in the file on the hard disk; the whole generation process does not require the operating system to participate.
!: The generation of physical addresses?
The CPU contains a unit called the MMU (Memory Management Unit) that converts logical addresses into physical addresses, as follows:
1. The CPU's ALU (arithmetic and logic unit) needs the contents of an address to execute an instruction, so the CPU issues a fetch from memory; the address it carries is the instruction's logical address
2. The CPU's MMU looks for an existing mapping in the mapping table it maintains; if one exists it is used directly, otherwise the MMU goes to memory
3. The translation rules from logical addresses to physical addresses are maintained by the operating system; once the instruction is located in memory, it is returned to the CPU
4. The CPU takes the instruction and executes it
!: How does the operating system guarantee the relative independence of memory space between running programs?
The operating system sets a base and a bound for each program's logical address space, and verifies each logical address when the CPU accesses memory with it; if the address exceeds the limits set for the program, the access is an illegal memory access and is not allowed.
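The base-and-bounds check above can be sketched in a few lines (the numbers and the `SegFault` exception are illustrative assumptions):

```python
class SegFault(Exception):
    """Raised for an illegal memory access, standing in for a hardware trap."""

def translate(logical_addr: int, base: int, bound: int) -> int:
    """Translate a logical address to a physical one, enforcing the bound."""
    if logical_addr < 0 or logical_addr >= bound:
        raise SegFault(f"illegal access at logical address {logical_addr:#x}")
    return base + logical_addr   # relocate by the program's base

phys = translate(100, base=0x4000, bound=0x1000)   # within bounds
```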
!: Contiguous physical memory allocation (contiguous memory allocation has many problems; it dates from the early days of computer design and is little used now; some of its problems may have other solutions, and other allocation mechanisms may avoid them altogether)
1. Memory fragmentation issues
* Address space in memory that cannot be exploited
* Internal fragmentation: memory left unused inside an allocation unit
* External fragmentation: memory left unused between allocated units
2. Allocation strategies
* First fit: to allocate n bytes, use the first available free block (scanning from address zero) that is larger than n bytes (tends to generate many external fragments)
* Best fit: to allocate n bytes, find the smallest free block of at least n bytes among the unallocated blocks in memory (tends to leave tiny blocks that are almost unusable)
* Worst fit: to allocate n bytes, find the free block whose size exceeds the request by the most (destroys large free blocks, so that large requests can no longer be satisfied)
(The point is not which of the three algorithms is best; this is only simple memory allocation, and programs request both large and small blocks, so none of them is optimal)
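The three strategies can be sketched with a toy free list, where each free block is a `(start, size)` pair and only the choice of block differs (the block layout and function name are illustrative assumptions):

```python
def allocate(free_blocks, n, strategy="first"):
    """Carve n bytes from a free list; return (start, new_free_blocks),
    or (None, free_blocks) if no block fits."""
    candidates = [(i, b) for i, b in enumerate(free_blocks) if b[1] >= n]
    if not candidates:
        return None, free_blocks
    if strategy == "first":                       # first sufficient block
        i, (start, size) = candidates[0]
    elif strategy == "best":                      # smallest sufficient block
        i, (start, size) = min(candidates, key=lambda c: c[1][1])
    else:                                         # "worst": largest block
        i, (start, size) = max(candidates, key=lambda c: c[1][1])
    blocks = list(free_blocks)
    leftover = (start + n, size - n)              # remainder becomes a fragment
    blocks[i:i + 1] = [leftover] if leftover[1] > 0 else []
    return start, blocks

free = [(0, 100), (200, 50), (300, 400)]
addr_first, _ = allocate(free, 40, "first")   # carves from (0, 100)
addr_best, _ = allocate(free, 40, "best")     # carves from (200, 50)
addr_worst, _ = allocate(free, 40, "worst")   # carves from (300, 400)
```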
3. Compaction-style and swap-style defragmentation
* Compaction (compact)
* Move (relocate) the allocated memory blocks so that the unallocated memory is gathered together
* Questions
1. When to relocate?
* A program cannot be moved while it is running; it can be moved while it is blocked, waiting, or suspended
2. Isn't the relocation overhead large?
* Yes, the cost of copying memory is still considerable
* Swapping (swap)
* Preempt waiting programs and reclaim their memory by moving them from memory to the hard disk; the data is not lost, merely moved from memory to disk
1. Which program to swap out? (with contiguous memory allocation, the unit of swapping is a whole program, not a program fragment)
2. When to swap in and out?
3. What is the cost of swapping in and out?
(These problems come with contiguous physical memory allocation, and some still have no good solution; this allocation scheme is obsolete)
!: Non-contiguous memory allocation
0. Why use non-contiguous memory allocation?
Because contiguous memory allocation inevitably creates fragmentation problems that reduce memory utilization; non-contiguous allocation avoids this problem
Advantages of non-contiguous memory allocation:
* Better memory utilization
* Allows code and data sharing
* Supports dynamic loading and dynamic linking
Disadvantages of non-contiguous memory allocation:
* The disadvantage is the cost of the allocation management itself; "management" here means translating logical addresses into physical addresses
* Doing this purely in software is expensive, because mapping every instruction in software costs too much
* The common hardware-supported solutions are segmentation and paging
1. Segmentation
* Segmented addressing scheme (segments follow the structure of the application, so the split is meaningful)
1. The program's own logical addresses are contiguous, but we can divide the logical address space into segments according to the program's characteristics, e.g., heap, run-time stack, program data, and program text (libraries, user code), and then place these segments in different physical address regions (this is the segment mapping), operating on them with different permissions in different modes
2. How is segment mapping performed?
1. When the application is written and compiled, memory is abstracted as a one-dimensional logical address space; when the program is loaded into memory, different segments are mapped to different blocks of physical memory, one segment per block
2. To access physical memory the program needs a two-dimensional tuple (segment number, offset within segment); the logical address is split into these two parts, segment number and in-segment offset
3. There are two concrete implementations: segment register + address register (x86), or a single address
4. How does a logical address translate into a physical address? The logical address consists of segment number + offset. First, look up the segment table by segment number (the table is maintained by the operating system and records ① the correspondence between logical segment numbers and physical blocks, i.e., the start address of the block for each segment, and ② each segment's length, since segments differ in size). Then check whether the offset exceeds the segment's limit; if the access is legal, the physical address is the segment's start address from the segment table plus the offset from the logical address
5. The segment table must be established by the operating system before the program runs; how it is built is strongly tied to the hardware (knowing this much is enough)
* Segmentation alone is nonetheless rarely used today; the paging mechanism dominates
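The segment-table translation (step 4 above) can be sketched as follows; the segment numbers, base addresses, and limits are made-up illustration values:

```python
class SegFault(Exception):
    """Raised when an offset exceeds the segment's limit."""

# segment number -> (base physical address, segment length/limit)
SEGMENT_TABLE = {
    0: (0x1000, 0x400),   # e.g. text
    1: (0x8000, 0x200),   # e.g. data
}

def seg_translate(seg: int, offset: int) -> int:
    """Physical address = segment base + offset, after the limit check."""
    base, limit = SEGMENT_TABLE[seg]
    if offset >= limit:
        raise SegFault(f"offset {offset:#x} exceeds segment {seg} limit")
    return base + offset

pa = seg_translate(1, 0x10)   # segment 1 base 0x8000 plus offset 0x10
```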
2. Paging
* Paged addressing scheme (pages have a fixed size, so the cut carries no meaning in itself)
1. Like segmentation, paging requires a page number (cf. segment number) and an offset within the page (cf. offset within the segment)
2. How is page mapping done?
1. Both the logical address space and the physical address space are paged, with a page size that is a power of 2
2. Logical pages and physical pages (frames, i.e., page frames): physical memory is divided into chunks of equal size, and a physical address consists of two parts, frame number and offset within the frame
3. How does a logical address translate into a physical address? physical address = 2^S × f + o, where S is the number of bits of the in-page offset, f is the frame number, and o is the offset within the page
4. The in-page and in-frame offsets are the same; the key point is that the logical page number and the frame number are generally not the same, so a page table is needed to record the correspondence between logical page numbers and physical page frames
5. The page table is maintained by the operating system, and each running program has its own page table
6. The pages of the logical address space are contiguous, but they need not map to contiguous physical page frames; not every page has a corresponding frame at the start, since after all the logical address space can be larger than the physical address space
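The translation physical address = 2^S × f + o can be sketched directly; the page-table contents and page size below are illustrative assumptions:

```python
PAGE_SIZE_BITS = 12                      # S: 4 KiB pages -> 12 offset bits
PAGE_TABLE = {0: 5, 1: 2, 2: 7}          # logical page number -> frame number

def page_translate(logical_addr: int) -> int:
    """Split the logical address into (page, offset) and rejoin with the frame."""
    page = logical_addr >> PAGE_SIZE_BITS                 # logical page number
    offset = logical_addr & ((1 << PAGE_SIZE_BITS) - 1)   # o, carried unchanged
    frame = PAGE_TABLE[page]                              # f, via the page table
    return (frame << PAGE_SIZE_BITS) | offset             # 2**S * f + o

pa = page_translate(0x1ABC)   # page 1 maps to frame 2
```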
3. Page table (page table)
* Page table overview
1. Each running program has a page table, and the page table is dynamic. To find the physical frame number for a logical page number, two things are needed: the page-table base address (stored in the page-table base register, whose contents have two parts, the page-table base address and the page-table length) and the page number
2. Each page table is essentially a large array: the index is the page number and the value is the frame number
3. Not every page number corresponds to a page-frame number
4. The page table is in memory, so every address access costs two memory accesses
1. The page table can be very large, because available address spaces keep growing; with 1024-byte pages, the page table for a 64-bit machine would not fit in memory
2. Access efficiency: since the page table is in memory, one address access needs two memory accesses
How do we deal with these problems?
1. (Solving the efficiency problem) Caching: cache the commonly used page-table entries somewhere closer to the CPU; look in the cache first, and only on a miss consult the actual page table
2. (Solving the space problem) Indirection: split the one large table into smaller pieces
* Fast table (TLB (translation look-aside buffer): located in the CPU's MMU)
1. What it caches is the contents of the page table
2. First look for the page-frame number in the TLB; if found, access the in-memory address directly; if not, consult the page table, perform the access, and update the TLB with that entry
3. Since running programs exhibit spatial and temporal locality, the probability of a miss in the fast table is only about 10%
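The hit/miss flow above can be sketched with a toy TLB in front of a page table; the table contents, capacity, and eviction policy here are illustrative assumptions:

```python
from collections import OrderedDict

PAGE_TABLE = {n: n + 100 for n in range(64)}   # page -> frame (fake mapping)
TLB_CAPACITY = 4
tlb = OrderedDict()                            # small, fast, LRU-evicted
stats = {"hit": 0, "miss": 0}

def lookup(page: int) -> int:
    """Return the frame for a page, going through the TLB first."""
    if page in tlb:
        stats["hit"] += 1
        tlb.move_to_end(page)                  # refresh recency on a hit
        return tlb[page]
    stats["miss"] += 1
    frame = PAGE_TABLE[page]                   # slow path: walk the page table
    tlb[page] = frame                          # update the TLB with the entry
    if len(tlb) > TLB_CAPACITY:
        tlb.popitem(last=False)                # evict least recently used
    return frame

for p in [1, 2, 1, 1, 3, 2]:                   # locality -> mostly hits
    lookup(p)
```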
* Two-level and multi-level page tables (trading time for space; the extra time can be mitigated by the TLB)
* In fact, whether a two-level or a multi-level page table is used, it merely splits the large page table, reducing the chance that it occupies one large contiguous region of memory; it does not reduce the total memory a large page table occupies. The strategy one can think of for that is to avoid loading all of an application's page tables into memory at once, i.e., virtual memory techniques
* Two-level page table
1. The logical address is divided into three fields: first-level index + second-level index + offset. The first-level index selects an entry in the first-level page table (which stores the base of the corresponding second-level page table); the second-level index then yields the actual physical frame number; and that frame number plus the in-frame (in-page) offset is the physical address
* Multi-level page table
1. An extension of the two-level page table: the logical address is divided further; it works just like the two-level case, only with more levels
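The two-level walk (step 1 above) can be sketched as follows; the field widths and table contents are illustrative assumptions:

```python
OFFSET_BITS = 12     # width of the in-page offset field
L2_BITS = 10         # width of the second-level index field

# first-level index -> second-level table; second-level index -> frame number
DIRECTORY = {
    0: {0: 8, 1: 9},
    1: {5: 42},
}

def walk(logical_addr: int) -> int:
    """Split the address into three fields and do two lookups, not one."""
    offset = logical_addr & ((1 << OFFSET_BITS) - 1)
    l2 = (logical_addr >> OFFSET_BITS) & ((1 << L2_BITS) - 1)
    l1 = logical_addr >> (OFFSET_BITS + L2_BITS)
    frame = DIRECTORY[l1][l2]                 # two small tables replace one big one
    return (frame << OFFSET_BITS) | offset

pa = walk((1 << 22) | (5 << 12) | 0x34)   # l1=1, l2=5 -> frame 42
```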
* Inverted page table (the entire system has only one page table)
* Do not let the size of the page table track the logical address space; let it track the physical address space (logical address spaces are growing faster than physical ones)
* "Inverted" means: an ordinary page table uses the logical page number to index the physical frame number, so its size is tied to the logical address space; the inverted table uses the physical frame number to index the logical page number, so its size is tied only to the physical memory, avoiding the huge page tables caused by huge logical address spaces
* The problem is also obvious: how do you find the physical address from the logical address?
There are many implementations, but the better scheme is a "hash-based lookup": define a hash function that takes a page number as input and yields a frame number
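A toy version of the idea: one table for the whole machine, sized by physical frames, with a hash picking the first probe position. This is a simplification of real hash-anchored inverted tables; all sizes and names are assumptions:

```python
NFRAMES = 8
# each frame slot records which (pid, page) currently occupies it, or None
inverted = [None] * NFRAMES

def put(pid: int, page: int, frame: int) -> None:
    """Record that this frame now holds (pid, page)."""
    inverted[frame] = (pid, page)

def find_frame(pid: int, page: int) -> int:
    """Hash to a likely frame, then probe until the entry matches."""
    start = hash((pid, page)) % NFRAMES
    for i in range(NFRAMES):
        frame = (start + i) % NFRAMES
        if inverted[frame] == (pid, page):
            return frame
    raise KeyError("page not resident")  # would trigger a page fault

put(pid=1, page=7, frame=3)
f = find_frame(1, 7)
```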
!: Virtual memory techniques
1. Causes
Program sizes are growing much faster than memory capacity
* To let more programs run within limited physical memory
* To let larger programs run within limited physical memory
* Hence part of the hard disk is used as virtual memory: when a program runs, its full contents are not loaded into memory, only the parts in use; the rest is loaded from the hard disk when needed
2. Overlay technique (predates virtual memory)
* If a program is too large to fit in memory, manual overlay technology can be used, keeping only the currently needed instructions and data in memory; "manual" means the programmer must write the program to handle this memory management
* Split the program according to the call relationships among its functions at run time; parts with no call relationship between them can share an overlay area, and one part of the program stays resident in memory to manage the overlays. The call relationships can be expressed as a tree, with functions at the same level of the tree placed in the same overlay area (recall the example from the lecture)
1. It places extra considerations and burdens on the programmer when writing the program (the main drawback)
2. The overlay mechanism itself has overhead
3. Swapping technique (predates virtual memory)
* If there are too many programs for memory to hold, automatic swapping technology can be used: programs that are temporarily not running are moved to external storage by the operating system. The cost is high, because each swap in or out moves an entire program (the main problem: the unit of exchange, a whole program, is too large)
* Early Unix used this technique
* Swapping here has problems similar to the swap-style defragmentation of contiguous memory allocation:
1. When to exchange?
2. Exchange strategy: who goes in and who goes out?
* The whole process is performed by the operating system and is transparent to the programmer
4. Virtual memory technique
* How it works
Based on the principle of program locality, a program does not need its full contents in memory to run; only the fragments required by the currently running portion are loaded into memory, and the rest stays on the hard disk.
While the program runs, if the page (segment) it requests is already in memory, execution simply continues; if not, a page fault is produced, and the OS brings the requested page (segment) into memory. If memory is full, a specific page (segment) replacement policy is used to swap some pages (segments) out, the required pages (segments) are loaded in, and the interrupted instruction is re-executed, keeping the program running
* The principle of program locality
Temporal locality: if an instruction has just executed, it may execute again soon; if a piece of data has just been accessed, it may be accessed again soon
Spatial locality: once a program accesses a storage unit, nearby storage units are likely to be accessed soon after
* Basic features
1. Memory appears larger (the hard disk is added in)
2. Partial exchange
3. The space itself can be discontinuous
* Virtual page-based memory management
1. Most virtual memory management systems use page-based storage management, i.e., on top of paged storage management they add the functions of demand paging and page replacement.
2. Compared with an ordinary page table, each page-table entry gains fields needed by the virtual memory implementation, e.g., a resident bit: whether the page is in memory (1) or not (0); protection bits (more than one bit): read/write protection; a modified (dirty) bit: whether the page has been modified; an accessed bit: whether the page has been accessed (used by replacement algorithms)
* Local page replacement algorithms
1. Optimal page replacement algorithm (OPT: optimal page replacement)
Proposed by Belady
The page to be swapped out is one that will never be used again, or will not be used for the longest time in the future.
It is a theoretical algorithm, since which pages will be referenced in the future cannot be predicted, but it can be used to evaluate other page replacement algorithms.
2. FIFO algorithm (FIFO: first in, first out)
The page to be swapped out is the one that entered memory first; that is, the page that has resided in memory longest is evicted.
It is the simplest, but its page-fault rate is somewhat higher, because this replacement criterion is meaningless: it is purely time-based and does not take the program's dynamic run-time behavior into account
3. Least recently used algorithm (LRU: least recently used)
The page to be swapped out is the least recently used page. For each page in memory, an access field records the time t elapsed since its last access; when a page must be evicted, the page with the largest t is chosen.
Although the algorithm is good, it needs considerable hardware support, is cumbersome to implement, and is costly. In practice approximate LRU algorithms are used; the clock algorithm is one such approximation.
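A minimal LRU sketch, using an ordered map as the recency queue (the frame count and reference string are illustrative; real hardware would track recency differently, as noted above):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Run LRU over a reference string; return the number of page faults."""
    frames = OrderedDict()   # page -> None, least recently used first
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)          # now the most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)    # evict least recently used
            frames[page] = None
    return faults

n = lru_faults([1, 2, 3, 1, 4, 2], nframes=3)
```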
4. Clock page replacement algorithm
The "clock" here does not mean polling on a timer; rather, the search for a victim page sweeps around like a clock hand
First add an accessed bit to each page, then link all pages into a circular queue with pointers. When a page is accessed, its accessed bit is set to 1 (done by hardware, though it can also be done in software). To choose the page to swap out, the replacement algorithm scans this circular queue FIFO-style: if a page's accessed bit is 0, swap it out directly; if it is 1, set it to 0 and move on
For the clock algorithm there is an enhanced version, the enhanced clock page replacement algorithm (also known as the second-chance method)
It lets dirty pages always survive one clock sweep: on top of the clock algorithm it also uses the dirty (modified) bit, and its replacement preference is, roughly, to evict pages that are neither accessed nor dirty first, then pages that are dirty but not accessed, and so on
Why give dirty pages priority to stay? A non-dirty page (one not modified) can simply be released without being written back to the hard disk, while a dirty page must also be written to the disk, which is costly
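The basic clock scan described above (accessed bits only, without the dirty-bit enhancement) can be sketched as follows; the frame count and reference string are illustrative:

```python
def clock_faults(refs, nframes):
    """Run the basic clock algorithm; return the number of page faults."""
    pages = [None] * nframes   # circular queue of resident pages
    used = [0] * nframes       # accessed bit per frame
    hand = 0                   # the clock hand
    faults = 0
    for page in refs:
        if page in pages:
            used[pages.index(page)] = 1       # hardware would set this bit
            continue
        faults += 1
        while used[hand] == 1:                # accessed page: second chance
            used[hand] = 0
            hand = (hand + 1) % nframes
        pages[hand] = page                    # victim found: replace it
        used[hand] = 1
        hand = (hand + 1) % nframes
    return faults

n = clock_faults([1, 2, 3, 1, 4, 2], nframes=3)
```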
5. Least frequently used algorithm (LFU: least frequently used)
For each page, record its access frequency over a period of time, and swap out the least frequently accessed page. LFU can be implemented with the same kind of hardware as LRU, and is likewise costly to implement.
It has quite a few problems; to begin with, evicting the least-used page is itself somewhat questionable (for example, a page heavily used early on but no longer needed keeps a high count)
6. Belady's anomaly
A phenomenon exhibited by the FIFO algorithm
The physical memory the operating system allocates to a program is usually smaller than the logical address space the program needs, so a program run with virtual memory will certainly incur page faults. Normally, the larger the physical memory allocated to the program, the lower the page-fault rate should be; Belady's anomaly refers to the phenomenon that, under some page replacement algorithms, the page-fault rate becomes larger as the allocated physical memory grows
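FIFO demonstrates the anomaly on the classic reference string: giving it more frames yields more page faults.

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Run FIFO over a reference string; return the number of page faults."""
    frames = deque()           # oldest page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()          # evict the first page that came in
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # Belady's classic example
f3 = fifo_faults(refs, 3)   # 9 faults with 3 frames
f4 = fifo_faults(refs, 4)   # 10 faults with 4 frames: more memory, more faults
```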
7. Thrashing
Cause: too many processes are running in the system, so a process spends most of its running time swapping pages in and out rather than doing actual work, causing CPU utilization to drop sharply toward zero; this condition is called "thrashing"
8. Comparison of LRU, FIFO, and clock
LRU and FIFO are essentially both first-in, first-out strategies, but LRU takes the program's dynamic run-time behavior into account: when the program accesses a page, that page is moved from within the stack to the top of the stack, whereas FIFO makes no adjustment according to the program's run-time behavior
And the clock algorithm is an approximation of LRU.
If a program has poor locality of access, both clock and LRU may degenerate into the FIFO algorithm
* Global page replacement algorithms
Working set model
(Background: by the principle of locality in program execution, a program's page accesses are not uniform over its run; different periods may concentrate on a few different pages)
The so-called working set: the set of pages a process actually accesses during a certain interval of time.
Working-set page replacement algorithm
Although a program can run with only a few pages resident, to produce fewer page faults the program's whole working set should be brought into memory; however, we cannot know exactly which pages the future working set will contain, so recent behavior is used as an approximation of future behavior
Because the working-set window slides backward as the program runs, pages that fall outside the working set after the window moves are swapped out, not because memory is insufficient (it may well be enough), but because they are no longer in the working set
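The working set W(t, Δ), the distinct pages referenced in the last Δ accesses ending at time t, can be computed as a short sketch (the reference string is illustrative):

```python
def working_set(refs, t, delta):
    """Distinct pages referenced in the window of delta accesses ending at t."""
    start = max(0, t - delta + 1)        # window slides backward with t
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 3, 3, 4, 4]
ws = working_set(refs, t=5, delta=3)     # accesses at times 3, 4, 5
```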
Page-fault-rate replacement algorithm
Page-fault rate = number of page faults / number of accesses
Increase the working-set length when the fault rate is high; reduce it when the fault rate is low
OK, but how do we increase or reduce it?
1. First set an initial window size
2. Treat the page-fault rate as an interval value
3. A threshold can be set: if the number of accesses between two page faults exceeds this value, the fault rate is on the low side, so shrink the working set at that point, removing the pages not accessed during that interval; if the number of accesses between two faults is below the value, the fault rate is on the high side, so expand the working set, adding the faulting page to the current working set
