The Self-Cultivation of Programmers: Operating Systems

Source: Internet
Author: User
Tags: posix, least privilege

Ext.: http://kb.cnblogs.com/page/211181/

Perhaps this one article can give you a comprehensive understanding of the operating system!

Before reading this article, it is recommended to read "Make your own 4-bit computer".

Directory:

1. The states of a process, the state transition diagram, and the events that cause each transition.

2. The difference between a process and a thread.

3. The ways in which processes communicate.

4. The ways in which threads synchronize.

5. How threads are implemented (the difference between user threads and kernel threads).

6. The difference between user mode and kernel mode.

7. The difference between the user stack and the kernel stack.

8. Memory pools, process pools, and thread pools.

9. The concept of deadlock, the causes of deadlock, the four necessary conditions for deadlock, the ways of handling deadlock, and the methods of preventing and avoiding deadlock.

10. Process scheduling algorithms.

11. How Windows memory management works (block, page, segment, segment-page).

12. The algorithms used for contiguous memory allocation and their merits and drawbacks.

13. Dynamic linking and static linking.

14. Basic paging and request (demand) paging storage management.

15. Basic segmentation and request segmentation storage management.

16. A comparison of segmentation and paging, with their advantages and disadvantages.

17. Several page replacement algorithms, and counting the number of page faults each produces. (How is LRU implemented?)

18. The definition and implementation of virtual memory.

19. The four characteristics of an operating system.

20. DMA.

21. Spooling.

22. The ways external storage space is allocated, with their advantages and disadvantages.

The operating system is a program that manages a computer's hardware and software resources; it is the core and cornerstone of the computer system. The operating system handles basic tasks such as managing and allocating memory, prioritizing the supply and demand of system resources, controlling input and output devices, operating the network, and managing the file system. It also provides an interface that allows users to interact with the system.

A program running on an operating system is typically made up of one process or a group of processes. Therefore, this article starts with the process!

1. The states of a process, the state transition diagram, and the events that cause each transition.

As shown in the figure, a process has three states: ready, running, and blocked. They are described in detail below:

Note: creation and exit are not process states. Blocked is also called waiting. The difference between waiting and ready: a waiting process is waiting for some resource other than the CPU, while a ready process is waiting only for the CPU.

1) Ready to running: when the process scheduler selects a ready process according to some scheduling policy and assigns the processor to it, the process changes from the ready state to the running state;

2) Running to waiting: a running process becomes unable to continue because it must wait for some event, and so changes from the running state to the waiting state. For example, the process issues an I/O request and waits for the external device to transfer data; the process requests a resource (main memory space or an external device) that cannot be satisfied and waits for it; or the process hits a fault while running (a program error, a main-memory read/write error, and so on) and waits for intervention;

3) Waiting to ready: when the event a waiting process is waiting for occurs, for example the I/O completes, the requested resource becomes available, or the error has been handled, the process does not enter the running state immediately; it first enters the ready state, and the process scheduler later switches it to the running state at an appropriate time;

4) Running to ready: a running process is suspended because its time slice runs out, or, in a system with preemptive priority scheduling, because a higher-priority process arrives and forces it to give up the processor; the process then changes from the running state to the ready state. A minimal sketch of these transitions appears below.
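
To make the transitions above concrete, here is a minimal C sketch added for this article (not real kernel code); the state and event names are invented for the example:

    /* A toy model of the three-state diagram: ready, running, blocked. */
    #include <stdio.h>

    typedef enum { READY, RUNNING, BLOCKED } proc_state;

    /* Hypothetical events driving the transitions described in the text. */
    typedef enum { SCHEDULED, TIME_SLICE_EXPIRED, WAIT_FOR_IO, IO_COMPLETED } event;

    proc_state transition(proc_state s, event e) {
        switch (s) {
        case READY:   return (e == SCHEDULED) ? RUNNING : s;         /* ready -> running */
        case RUNNING: if (e == WAIT_FOR_IO)        return BLOCKED;   /* running -> waiting */
                      if (e == TIME_SLICE_EXPIRED) return READY;     /* running -> ready */
                      return s;
        case BLOCKED: return (e == IO_COMPLETED) ? READY : s;        /* waiting -> ready, never straight to running */
        }
        return s;
    }

    int main(void) {
        proc_state s = READY;
        s = transition(s, SCHEDULED);       /* dispatched by the scheduler */
        s = transition(s, WAIT_FOR_IO);     /* issues an I/O request */
        s = transition(s, IO_COMPLETED);    /* I/O done: back to ready, not running */
        printf("final state: %d\n", s);     /* prints 0 (READY) */
        return 0;
    }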

2. The difference between a process and a thread.

See the articles shared previously:

A graphical description of processes and threads

The difference between a process and a thread

3. The ways in which processes communicate.

Taking Linux as an example (Windows is similar), the main ways processes communicate under Linux are:

1) Pipes and named pipes (FIFOs): pipes can be used for communication between related processes, while named pipes remove the restriction that pipes have no name, so in addition to everything a pipe can do, they also allow communication between unrelated processes. (A minimal parent/child pipe example is sketched after this list.)

2) Signals: a signal is a relatively complex form of communication used to notify a receiving process that some event has occurred. Besides inter-process communication, a process can also send a signal to itself. In addition to the early UNIX signal() semantics, Linux also supports the POSIX.1 sigaction() semantics (in fact the function is based on BSD, which reimplemented signal() on top of sigaction() in order to provide a reliable signal mechanism while keeping a unified external interface);

3) Message queues: a message queue is a linked list of messages, and includes POSIX message queues and System V message queues. A process with sufficient permission can add messages to a queue, and a process with read permission can read messages from it. Message queues overcome the drawbacks that signals carry little information, and that pipes can carry only unformatted byte streams with a limited buffer size.

4) Shared memory: allows multiple processes to access the same region of memory; it is the fastest form of IPC available, and was designed around the inefficiency of the other communication mechanisms. It is often used together with another mechanism, such as semaphores, to achieve synchronization and mutual exclusion between processes.

5) Semaphores: used mainly as a means of synchronization between processes, and between different threads of the same process.

6) Sockets: a more general inter-process communication mechanism that can also be used between processes on different machines. Originally developed in the BSD branch of UNIX, sockets can now generally be ported to other Unix-like systems: both Linux and the System V variants support them.
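
As a concrete illustration of the first mechanism above, here is a minimal sketch of pipe-based communication between a parent and its child process (the related-process case), using only standard POSIX calls:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];                       /* fd[0] = read end, fd[1] = write end */
        if (pipe(fd) == -1) return 1;

        pid_t pid = fork();
        if (pid == 0) {                  /* child: reads from the pipe */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            buf[n > 0 ? n : 0] = '\0';
            printf("child received: %s\n", buf);
            _exit(0);
        }
        /* parent: writes into the pipe */
        close(fd[0]);
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
        return 0;
    }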

4. The ways in which threads synchronize.

There are four main ways to synchronize threads: critical sections (CriticalSection), mutexes (Mutex), semaphores (Semaphore), and events.

Their main differences and characteristics are as follows:

1) Critical section: serializes access to a public resource or a piece of code among multiple threads; it is fast and suitable for controlling data access. Only one thread is allowed to access the shared resource at any time. If several threads try to access the public resource, once one thread has entered, the others are suspended and must wait until the thread inside leaves and the critical section is released; only then can another thread claim it.

2) Mutex: uses a mutual-exclusion object. Only the thread that owns the mutex has access to the public resource, and because there is only one mutex object, the public resource cannot be accessed by several threads at the same time. A mutex can be used not only to share a common resource safely within one application, but also to share a common resource safely across different applications. (A small mutex example in C appears after this list.)

3) Semaphore: allows several threads to access the same resource at the same time, but limits the maximum number of threads that may access the resource concurrently.

4) Event: keeps threads synchronized by way of notifications, and also makes it convenient to implement priority comparisons among multiple threads.
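
As a small illustration of mutual exclusion (sketched here with POSIX threads rather than the Win32 primitives named above), two threads increment a shared counter under a pthread mutex; without the lock the final count would be unpredictable:

    #include <stdio.h>
    #include <pthread.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* only one thread may enter at a time */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* always 200000 with the mutex held */
        return 0;
    }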

5. How threads are implemented (in other words, the difference between a user thread and a kernel thread).

Thread implementations fall into two categories: user-level threads and kernel-level threads; the latter are also called kernel-supported threads or lightweight processes. Multithreaded operating systems differ in their implementations: some systems implement user-level threads, others implement kernel-level threads.

A user-level thread is implemented inside the user program without kernel support and does not depend on the operating system core. The application process uses the functions provided by a thread library to create, synchronize, schedule, and manage its threads. No user-mode/kernel-mode switching is needed, so it is fast, but the operating system kernel does not know that multiple threads exist, so when one thread blocks, the whole process (including all of its threads) blocks. Because processor time slices are allocated with the process as the basic unit, each thread gets comparatively little execution time.

Kernel-level thread: created and destroyed by the operating system kernel. The kernel maintains the context of processes and threads and performs thread switching. A kernel thread that blocks on I/O does not affect the running of other threads.

The following are the differences between a user-level thread and a kernel-level thread:

1) Kernel-supported threads are perceived by the OS kernel, while user-level threads are not. (A small sketch illustrating this appears after this list.)

2) Creating, destroying, and scheduling user-level threads does not require the support of the OS kernel; they are handled in user space (for example at the language level, as in Java). Creating, destroying, and scheduling kernel-supported threads requires the support of the OS kernel, and is largely the same as creating, destroying, and scheduling processes.

3) When a user-level thread executes a system call instruction, its whole owning process blocks; when a kernel-supported thread executes a system call instruction, only that thread blocks.

4) In a system with only user-level threads, the CPU is scheduled with the process as the unit; the multiple threads in a running process take turns under the control of the user program. In a system with kernel-supported threads, the CPU is scheduled with the thread as the unit, and the OS thread scheduler is responsible for thread scheduling.

5) The program entity of a user-level thread is a program that runs in user mode, while the program entity of a kernel-supported thread is a program that can run in either mode.
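
A minimal sketch of point 1, added for illustration and assuming Linux with the NPTL 1:1 threading model: each pthread is backed by a kernel task, so each thread sees its own kernel thread id (gettid), which is exactly what "perceived by the OS kernel" means:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <pthread.h>
    #include <sys/syscall.h>

    static void *show_ids(void *arg) {
        (void)arg;
        /* Same process id, but a distinct kernel thread id per thread. */
        printf("pid=%d kernel tid=%ld\n", (int)getpid(), (long)syscall(SYS_gettid));
        return NULL;
    }

    int main(void) {
        pthread_t t;
        show_ids(NULL);                  /* main thread: tid equals pid */
        pthread_create(&t, NULL, show_ids, NULL);
        pthread_join(t, NULL);           /* second thread: same pid, different tid */
        return 0;
    }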

6. The difference between user mode and kernel mode.

Before discussing the difference between user mode and kernel mode, let us first talk about the concept of a "privilege level".

Anyone familiar with Unix/Linux systems knows that we create a child process by calling the fork function. In fact, fork completes the work of creating a process by way of a system call, and the concrete work is carried out by sys_fork. For any operating system, creating a new process is a core function: a great deal of low-level work must be done, and physical resources of the system are consumed, such as allocating physical memory, copying information from the parent process, and copying page-table entries. Obviously, this cannot be something any arbitrary program is allowed to do, which naturally leads to the concept of privilege levels: the most critical powers must be exercised by highly privileged programs, so that management can be centralized and conflicts over access to and use of limited resources can be reduced.

A privilege level is clearly a very effective means of managing and controlling program execution, so the hardware provides a great deal of support for it. On the Intel x86 architecture, the CPU has four privilege levels, 0 through 3, with 0 the highest and 3 the lowest; the hardware checks the privilege level of every instruction before it executes. The related concepts are CPL, DPL, and RPL, which will not be elaborated here. The hardware provides the mechanism for using privilege levels; making good use of it is the software's problem, and that is what the operating system does. Unix/Linux uses only privilege level 0 and privilege level 3. In other words, on a Unix/Linux system, an instruction running at privilege level 0 has the greatest power the CPU can provide, while an instruction running at privilege level 3 has the lowest, most basic power the CPU provides.

OK, with the concept of a "privilege level" understood, we can grasp the difference between user mode and kernel mode more intuitively. Kernel mode and user mode are the two levels at which the operating system runs. When a program runs at privilege level 3 it is said to run in user mode, because this is the lowest privilege level and the level at which ordinary user processes run; most of the programs users deal with directly run in user mode. Conversely, when a program runs at privilege level 0, it is said to run in kernel mode. Programs running in user mode cannot directly access the operating system kernel's data structures and routines. When we run a program, most of the time it runs in user mode; it switches to kernel mode when it needs the operating system's help to do work that it has neither the power nor the ability to do itself. Typically, the following three situations cause a switch from user mode to kernel mode:

1) System call

This is a way for a user-mode process to switch to kernel mode actively: the user-mode process requests, through a system call, the use of a service routine provided by the operating system to complete work on its behalf. For example, in the earlier example, fork() actually executes a system call that creates a new process. The core of the system-call mechanism is an interrupt that the operating system opens especially to user programs, such as the int 0x80 interrupt on Linux.
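
A minimal illustration of this user-mode-to-kernel-mode switch, added for this article: both the libc wrapper and the raw syscall() form end up trapping into the kernel (historically via int 0x80 on x86, via syscall/sysenter on modern CPUs):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        pid_t a = getpid();              /* libc wrapper around the getpid system call */
        long  b = syscall(SYS_getpid);   /* the same request, issued explicitly */
        printf("getpid() = %d, syscall(SYS_getpid) = %ld\n", (int)a, b);
        return 0;
    }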

2) Exceptions

When the CPU is executing a program running in user mode and some unforeseen exception occurs, the currently running process switches to the kernel routine that handles the exception, and so enters kernel mode; a page fault is an example.

3) Peripheral device interrupts

When a peripheral device finishes an operation the user requested, it signals the CPU with the corresponding interrupt. The CPU then suspends the instruction it was about to execute and runs the handler for that interrupt signal instead. If the instruction that was previously executing belonged to a user-mode program, this naturally results in a switch from user mode to kernel mode. For example, when a disk read or write completes, the system switches to the disk interrupt handler to carry out the follow-up work.

These three are the main ways the system goes from user mode to kernel mode at run time. A system call can be thought of as initiated actively by the user process, while exceptions and peripheral device interrupts are passive.

7. The difference between the user stack and the kernel stack.

In an operating system, every process has two stacks: a user stack, which lives in user space, and a kernel stack, which lives in kernel space. When the process runs in user space, the CPU stack-pointer register holds a user-stack address and the user stack is used; when the process runs in kernel space, the stack-pointer register holds a kernel-stack address and the kernel stack is used.

The kernel stack is a region of memory belonging to the operating system's address space. Its main uses are:

1) Saving the interrupted context: for nested interrupts, the context of the interrupted program is pushed onto the system stack and popped in reverse order when the interrupts return;

2) Saving the parameters, return values, return addresses, and local variables of calls between operating-system subroutines (functions).

The user stack is a region within the user process's address space that holds the parameters, return values, return addresses, and local variables of calls between the user process's subroutines (functions).

PS: So why not use a single stack? Why waste so much space?

1) If only the system stack were used: the system stack is limited in size. If interrupts have 16 priority levels, the system stack generally needs a depth of only about 15 frames (enough to save 15 lower-priority interrupts while one higher-priority interrupt handler runs). But a user program may make many nested subroutine calls, so after 15 calls the parameters, return values, return addresses, and local variables of further subroutine (function) calls could no longer be saved, and the user program could not run correctly.

2) If only the user stack were used: we know that system code must run under some kind of protection, but the user stack lives in user space and is freely accessible while the CPU is in user mode, so it cannot provide the protection that kernel-mode execution requires (or could do so only with great difficulty).

8. Memory pools, process pools, and thread pools.

First, the concept of "pooling". Pooling means reserving a large amount of some resource in advance, ready for use and reuse. Pooling is widely applied: memory pools, thread pools, connection pools, and so on. For memory pools in particular, it is worth looking at the memory-pool implementations in open-source web servers such as Apache and Nginx.

In real applications, allocating memory and creating processes or threads all involve system calls, and a system call forces the program to switch from user mode to kernel mode, which is a very time-consuming operation. Therefore, when a program must frequently request and release memory, or create and destroy processes or threads, memory pools, process pools, and thread pools are commonly used to improve its performance.

Thread pool: the principle is very simple and resembles the concept of a buffer in the operating system. The flow is: start a number of threads and put them to sleep; when a thread is needed to do a specific piece of work, wake one of the sleeping threads in the pool and let it do the work; when the work is finished, the thread goes back to sleep instead of being destroyed.

The process pool is the same as the thread pool.

Memory pool: the program requests one sufficiently large block of memory from the operating system in advance. Afterwards, when the program needs memory it takes it directly from the pool rather than from the operating system, and likewise, when the program frees memory it does not really return it to the operating system but returns it to the pool. Only when the program exits (or at some specific moment) does the memory pool release the memory it originally requested.
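
Here is a minimal sketch of that idea, a toy bump allocator written for this article (real pools such as Apache's APR pools are far more elaborate): one large block is requested from the OS up front, pieces of it are handed out without further system calls, and the real free() happens only when the pool is destroyed:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        char  *base;     /* the big block requested from the OS once */
        size_t size;
        size_t used;
    } mem_pool;

    int pool_init(mem_pool *p, size_t size) {
        p->base = malloc(size);          /* the only "real" allocation */
        p->size = size;
        p->used = 0;
        return p->base ? 0 : -1;
    }

    void *pool_alloc(mem_pool *p, size_t n) {
        n = (n + 7) & ~(size_t)7;        /* keep allocations 8-byte aligned */
        if (p->used + n > p->size) return NULL;
        void *ptr = p->base + p->used;
        p->used += n;
        return ptr;                      /* no per-allocation system call */
    }

    void pool_destroy(mem_pool *p) {
        free(p->base);                   /* memory goes back to the OS only here */
    }

    int main(void) {
        mem_pool p;
        if (pool_init(&p, 4096) != 0) return 1;
        int  *a = pool_alloc(&p, sizeof(int) * 10);
        char *s = pool_alloc(&p, 32);
        printf("a=%p s=%p used=%zu\n", (void *)a, (void *)s, p.used);
        pool_destroy(&p);
        return 0;
    }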

9. The concept of deadlock, the causes of deadlock, the four necessary conditions for deadlock, and the methods of preventing and avoiding deadlock.

In a computer system, if the system's resource-allocation policy is inappropriate, or, more commonly, if the programs that programmers write contain errors, processes may end up deadlocked as a result of competing for resources.

The main causes of deadlocks are:

1) Insufficient system resources.

2) An unsuitable order in which processes run.

3) Improper allocation of resources, and so on.

If the system has sufficient resources and every process's resource requests can be satisfied, the likelihood of deadlock is very low; otherwise, contention over limited resources can lock the processes into a deadlock. In addition, processes run in different orders and at different speeds, which may likewise produce deadlock.

The four necessary conditions for creating a deadlock:

1) Mutual exclusion: a resource can be used by only one process at a time.

2) Hold and wait: when a process is blocked requesting a resource, it keeps holding the resources it has already acquired.

3) No preemption: resources a process has acquired cannot be forcibly taken away before it has finished using them.

4) Circular wait: a circular chain of processes forms, each waiting for a resource held by the next.

These four conditions are necessary for deadlock: whenever a deadlock exists in the system, all of them hold; and as long as any one of them is not satisfied, no deadlock can occur.

Resolving and preventing deadlock:

Understanding the causes of deadlock, and in particular its four necessary conditions, lets us prevent, avoid, and resolve deadlock as far as possible. In system design and process scheduling, therefore, pay attention to how to keep these four necessary conditions from holding, and how to choose a resource-allocation algorithm that keeps any process from permanently occupying system resources.

In addition, prevent a process in the waiting state from holding on to resources. While the system runs, dynamically check every resource request a process issues and decide, based on the result of the check, whether it can be granted: if granting it could lead the system into deadlock, do not grant it; otherwise grant it. Resource allocation therefore needs to be planned sensibly. A simple way of breaking the circular-wait condition is sketched below.
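
The following sketch, added for illustration, breaks the circular-wait condition by lock ordering: if every thread acquires the two mutexes in the same global order, the cycle of waiting that produces a deadlock cannot form:

    #include <stdio.h>
    #include <pthread.h>

    static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        const char *name = arg;
        /* Both threads take A before B; taking them in opposite orders
         * in different threads is what could deadlock. */
        pthread_mutex_lock(&A);
        pthread_mutex_lock(&B);
        printf("%s holds A and B\n", name);
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, "thread 1");
        pthread_create(&t2, NULL, worker, "thread 2");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }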

10. Process scheduling algorithm.

Several process scheduling algorithms:

I. First-come-first-served and shortest-job(-process)-first scheduling

1. First-come-first-served. The first-come-first-served (FCFS) algorithm is the simplest scheduling algorithm and can be used both for job scheduling and for process scheduling. FCFS favors long jobs (processes) and works against short ones; it suits CPU-bound jobs but not I/O-bound jobs (processes).

2. Shortest-job (process)-first. The shortest-job (process)-first (SJ/PF) algorithm schedules short jobs or short processes first, and can be used for both job scheduling and process scheduling. However, it works against long jobs, it cannot respond in time to urgent jobs (processes), and the length of a job is only an estimate.

II. Highest-priority-first scheduling

1. Types of priority scheduling. The highest-priority-first (FPF) algorithm was introduced to take care of urgent jobs and let them get into the system quickly. It is often used in batch systems as a job-scheduling algorithm, is used for process scheduling in many operating systems, and can also be used in real-time systems. When used for job scheduling, it loads a number of the highest-priority jobs from the backlog queue into memory. When used for process scheduling, it assigns the processor to the highest-priority process in the ready queue; at this point the algorithm can be further divided into the following two kinds:

1) Non-preemptive priority algorithm

2) The preemptive priority scheduling algorithm (used in high-performance computer operating systems)

2. Priority type

The heart of a highest-priority-first algorithm is whether it uses static or dynamic priorities, and how the priority of a process is determined.

3. Highest-response-ratio-first scheduling

To make up for the shortcomings of the shortest-job-first algorithm, we introduce a dynamic priority that grows as a job's waiting time increases, so the priority of long jobs rises while they wait. The rule for the change in priority can be written as: priority = (waiting time + requested service time) / requested service time = response time / requested service time. A small worked example follows.
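
A small worked example of the response-ratio formula (the job names and times are made up): among the waiting jobs, pick the one with the highest (waiting time + service time) / service time:

    #include <stdio.h>

    typedef struct { const char *name; double wait; double service; } job;

    int main(void) {
        job jobs[] = { {"J1", 10.0, 5.0}, {"J2", 2.0, 1.0}, {"J3", 6.0, 8.0} };
        int best = 0;
        double best_ratio = 0.0;
        for (int i = 0; i < 3; i++) {
            double ratio = (jobs[i].wait + jobs[i].service) / jobs[i].service;
            printf("%s: response ratio = %.2f\n", jobs[i].name, ratio);
            if (ratio > best_ratio) { best_ratio = ratio; best = i; }
        }
        printf("schedule %s next\n", jobs[best].name);   /* J2: ratio 3.00 */
        return 0;
    }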

III. Time-slice round-robin scheduling

1. Round robin. Round-robin scheduling is generally used for process scheduling. On each scheduling decision the CPU is given to the process at the head of the queue, which runs for one time slice. When the time slice is used up, a timer raises a clock interrupt, the process is stopped, and it is sent to the tail of the ready queue.

2. Multilevel feedback queues. The multilevel feedback queue algorithm needs no advance knowledge of how long each process will run, and it is widely regarded as a good process scheduling algorithm. It works as follows:

1) Set up several ready queues and give each queue a different priority. The higher a queue's priority, the smaller the time slice each of its processes is given.

2) When a new process enters memory, it is first placed at the tail of the first queue and waits to be scheduled according to the FCFS principle. If it can finish within one time slice it leaves the system; if not, it moves to the tail of the second queue and again waits to be scheduled in the same way, and so on. Thus, by the time a long job (process) has dropped from the first queue to the nth (the last) queue, it runs in the nth queue by round robin.

3) The scheduler runs processes in the second queue only when the first queue is idle; in general, processes in queue i are scheduled to run, each with the corresponding time slice, only when queues 1 through i-1 are all empty.

4) If the processor is serving a process in queue i and a new process enters a higher-priority queue, the new process preempts the processor, and the process that was running is placed at the tail of queue i.

11. How Windows memory management works (block, page, segment, segment-page).

Block management

Main memory is divided into large blocks. When a required program fragment is not in main memory, a block of main memory is allocated and the fragment is loaded into it; even if the program needs only a few bytes, it is still given a whole block. This wastes a great deal of space, but it is easy to manage.

Page management

Main memory is divided into pages, and each page is much smaller than a block, so space utilization is obviously much higher than with block management.

Segment management

Main memory is divided into segments, each occupying much less space than a page, so space utilization is even higher than with page management, but there is another drawback: a program fragment may be split into dozens of segments, so a lot of time is wasted computing the physical address of each segment (as everyone knows, the most time-consuming things in a computer are I/O and this kind of address calculation).

Segment-page management (the method used today)

Combines the advantages of segment management and page management: the address space is first divided into segments, and each segment is then divided into pages.

12. The algorithms used for contiguous memory allocation and their merits and drawbacks.

Contiguous memory allocation comes in four forms: single contiguous allocation, fixed partition allocation, dynamic partition allocation, and relocatable (dynamic relocation) partition allocation.

Single continuous allocation: can only be used in single-user, single-tasking operating systems.

Fixed partition allocation: the simplest storage-management scheme capable of running multiple programs.

Dynamic partition allocation: allocates memory space dynamically according to the actual needs of each process (a first-fit sketch appears after this list).

Relocatable partition allocation: a system or user program must still be loaded into a contiguous region of memory, but partitions can be moved (relocated) to merge scattered free fragments.
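
As an added illustration of dynamic partition allocation, here is a minimal first-fit sketch, one of the common placement algorithms (best-fit and worst-fit differ only in which hole they choose); the hole table is invented for the example:

    #include <stdio.h>

    #define NHOLES 4

    typedef struct { size_t start; size_t size; } hole;

    /* Walk the free list and carve the request out of the first hole
     * that is large enough; return its start, or (size_t)-1 on failure. */
    size_t first_fit(hole holes[], int n, size_t request) {
        for (int i = 0; i < n; i++) {
            if (holes[i].size >= request) {
                size_t addr = holes[i].start;
                holes[i].start += request;   /* shrink the hole from the front */
                holes[i].size  -= request;
                return addr;
            }
        }
        return (size_t)-1;                   /* no hole big enough */
    }

    int main(void) {
        hole holes[NHOLES] = { {0, 100}, {300, 50}, {600, 200}, {900, 80} };
        printf("alloc 120 -> %zu\n", first_fit(holes, NHOLES, 120));  /* 600 */
        printf("alloc 60  -> %zu\n", first_fit(holes, NHOLES, 60));   /* 0   */
        return 0;
    }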

13. Dynamic linking and static linking.

Static linking copies the required executable code directly into the caller at link time. The advantage is that when the program is released it does not depend on external libraries, that is, the libraries no longer need to ship alongside it, and the program can run on its own; the drawback is that the binary may be relatively large.

Dynamic linking does not copy the executable code at compile time. Instead, it records a set of symbols and parameters and hands them to the operating system when the program is loaded or run; the operating system loads the required dynamic libraries into memory, and when the program reaches the relevant code it shares the executable code of the dynamic library already loaded in memory, achieving linking at run time. The advantage is that many programs can share the same code without keeping multiple copies on disk; the drawback is the loading done at run time, which can affect the program's performance early in its execution.
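
A minimal example of run-time dynamic linking with the POSIX dlopen/dlsym API (the library and symbol, libm and "cos", are chosen purely for illustration; on older glibc, link with -ldl):

    #include <stdio.h>
    #include <dlfcn.h>

    int main(void) {
        /* Load the shared library at run time rather than at link time. */
        void *handle = dlopen("libm.so.6", RTLD_LAZY);
        if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        /* Look up a symbol in the loaded library. */
        double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
        if (!cosine) { fprintf(stderr, "%s\n", dlerror()); dlclose(handle); return 1; }

        printf("cos(0.0) = %f\n", cosine(0.0));
        dlclose(handle);
        return 0;
    }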

14. Basic paging and request (demand) paging storage management.

The basic paging storage management method has the following characteristics:

1) One-shot loading: a job must be loaded into memory in its entirety before it can run. Many jobs do not use all of their code and data on every run, so loading everything at once wastes memory space.

2) Residency: once a job has been loaded into memory it stays there until it finishes running. Even though a running process may spend long periods waiting for I/O, and some program modules may be needed only once per run (or not at all), they still keep occupying valuable memory.

Request (demand) paging storage management is a common way to implement virtual memory, built on top of basic paging storage management. The basic idea: before the process starts running, only the pages needed for the part about to execute are loaded; during execution, pages that are accessed but not yet in memory are brought in dynamically through the page-fault interrupt; when memory is full and a new page must be loaded, a page-replacement algorithm chooses a suitable page to swap out in order to make room for the new one. Implementing request paging requires some hardware support, including the page-table mechanism, the page-fault mechanism, and the address-translation mechanism. A tiny address-translation sketch follows.
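
A tiny sketch of the address split used by paging, added for illustration and assuming 4 KiB pages with a made-up single-level page table: a linear address is divided into a page number and an in-page offset, and the page table maps page numbers to frame numbers:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12                       /* 4 KiB pages */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)

    int main(void) {
        uint32_t page_table[] = { 7, 3, 11, 2 };        /* page -> frame (hypothetical) */
        uint32_t vaddr  = 0x2ABC;                       /* some virtual address */
        uint32_t page   = vaddr >> PAGE_SHIFT;          /* page number = 2 */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);      /* offset within the page */
        uint32_t paddr  = (page_table[page] << PAGE_SHIFT) | offset;
        printf("vaddr 0x%x -> page %u, offset 0x%x -> paddr 0x%x\n",
               vaddr, page, offset, paddr);
        return 0;
    }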

15. Basic segmentation and request segmentation storage management. (omitted)

16. A comparison of segmentation and paging, with their advantages and disadvantages.

Segmentation and paging are both ways of dividing or mapping an address space. The main differences between the two are:

1) A page is a physical unit of information. Paging exists to achieve discrete allocation, reduce memory fragmentation, and raise memory utilization; in other words, paging is purely a need of system management, not of the user (and it is transparent to the user). A segment is a logical unit of information, containing a set of information that is relatively complete in meaning (a data segment, a code segment, a stack segment, and so on). The purpose of segmentation is to better meet the needs of the user (and it is visible to the user).

2) The size of a page is fixed and determined by the system: the logical address is divided by the hardware into a page number and an in-page offset, so a system can have only one page size. The length of a segment is not fixed; it depends on the program the user writes, and is usually decided by the compiler when translating the source program, according to the nature of the information.

3) The address space of a paged job is one-dimensional, that is, a single linear space: the programmer needs only one value (the linear address) to identify an address. The address space of a segmented job is two-dimensional: to identify an address, the programmer must give both the segment name (data segment, code segment, stack segment, and so on) and the offset within the segment.

4) Both pages and segments have storage-protection mechanisms, but the access permissions differ: a segment can have read, write, and execute permissions, while a page has only read and write permissions.

17. Several page replacement algorithms.

1) Optimal replacement (OPT), the ideal replacement algorithm

This is an idealized page replacement algorithm that is practically impossible to implement. Its basic idea: when a page fault occurs, some of the pages in memory will be accessed very soon (including the page holding the instruction that follows next), while others may not be accessed for another 10, 100, or 1,000 instructions. Each page can be labeled with the number of instructions that will execute before the page is accessed for the first time. The optimal algorithm simply says: replace the page with the largest label. The only problem with this algorithm is that it cannot be implemented: when a page fault occurs, the operating system has no way of knowing when each page will next be accessed. Although it cannot be realized, the optimal algorithm can be used as a yardstick for measuring the performance of realizable algorithms.

2) First-in, first-out replacement (FIFO)

The simplest page replacement algorithm is first-in, first-out. Its essence is always to replace the page that has been resident in main memory the longest (that is, the oldest page): the page that entered memory first leaves memory first. The reasoning is that the page brought in earliest is more likely no longer to be in use than one brought in recently. A FIFO queue is kept of all the pages in memory; the page to be replaced is always at the head of the queue, and when a page is brought into memory it is inserted at the tail.

This algorithm is ideal only when the address space is accessed in linear order; otherwise it is inefficient, because pages that are accessed often tend also to be the ones that have stayed in main memory the longest, with the result that they become "old" and keep getting replaced.

Another drawback of FIFO is its anomaly (Belady's anomaly): adding memory frames can actually increase the page-fault rate. Of course, the reference patterns that cause this anomaly are in fact rare.

3) Least recently used (LRU)

The essential difference between the FIFO algorithm and the OPT algorithm is that FIFO bases its replacement decision on when a page entered memory, whereas OPT bases it on when the page will be used in the future. If the recent past is a good approximation of the near future, we can replace the page that has gone unused for the longest time in the past. The essence: when a page must be replaced, choose the one that has not been used for the longest time recently. This is the least-recently-used (LRU) algorithm. LRU tracks the time each page was last used, and when a page must be replaced, it chooses the page whose last use lies furthest in the past. A minimal sketch follows.
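
Answering the "how is LRU implemented?" question from the table of contents, here is a minimal sketch (added for illustration) that stamps each frame with the time of its last use and evicts the smallest stamp; real systems usually approximate this with hardware reference bits or a linked list:

    #include <stdio.h>

    #define NFRAMES 3

    int main(void) {
        int frames[NFRAMES], last_used[NFRAMES];
        int refs[] = {1, 2, 3, 1, 4, 2};           /* a sample page reference string */
        int nrefs = sizeof(refs) / sizeof(refs[0]);
        int faults = 0;

        for (int i = 0; i < NFRAMES; i++) frames[i] = -1;   /* all frames start empty */

        for (int t = 0; t < nrefs; t++) {
            int page = refs[t], hit = -1, victim = 0;
            for (int i = 0; i < NFRAMES; i++)
                if (frames[i] == page) hit = i;
            if (hit >= 0) {
                last_used[hit] = t;                /* hit: refresh its timestamp */
                continue;
            }
            faults++;
            for (int i = 0; i < NFRAMES; i++) {    /* miss: pick an empty or the least recently used frame */
                if (frames[i] == -1) { victim = i; break; }
                if (last_used[i] < last_used[victim]) victim = i;
            }
            frames[victim] = page;
            last_used[victim] = t;
        }
        printf("page faults: %d\n", faults);       /* 5 for this reference string */
        return 0;
    }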

18. How to define and implement virtual memory.

Virtual memory is a memory-management technique of computer systems. It lets an application believe it has a contiguous block of available memory (a contiguous, complete address space), when in reality that memory is usually split across multiple physical memory fragments, with some parts temporarily stored on external disk storage and swapped in when needed. Compared with systems that do not use virtual memory, systems that use it make large programs easier to write and use real physical memory (such as RAM) more efficiently.

19. Four features of the operating system.

1) Concurrency

Parallelism and concurrency are two concepts that are similar but essentially different. Parallelism means that two or more events happen at the same instant; it is a microscopic concept, that is, the events physically occur simultaneously. Concurrency means that two or more events happen within the same interval of time; it is a more macroscopic concept. In a multiprogramming environment, concurrency means that many programs run during the same period of time, but on a single-processor system only one program can execute at any given instant, so at the micro level these programs execute in turn. Note that ordinary programs are static entities and cannot execute concurrently. To let programs execute concurrently, the system must create a process for each of them. A process, also called a task, is simply the basic unit that can run independently in the system and to which resources are allocated; it is an active entity. Multiple processes can execute concurrently and exchange information. To run, a process needs certain resources such as the CPU, storage space, and I/O devices. The purpose of introducing processes into the operating system is precisely to allow programs to execute concurrently.

2) Sharing

Sharing means that the resources of the system can be used jointly by the multiple processes executing concurrently in memory. Because resources differ in nature, concurrent processes share them in different ways, which can be divided into mutually exclusive sharing and simultaneous access.

3) Virtualization

Virtualization means turning one physical entity into several logical counterparts through some technique. In an operating system, virtualization is achieved mainly by time-division multiplexing. Obviously, if one physical device corresponds to n virtual logical devices, each virtual device can run at no more than 1/n of the physical device's speed.

4) Asynchrony

In a multiprogramming environment, many processes are allowed to execute concurrently. Because of resource constraints and other factors, a process usually does not run "in one go" but proceeds in a stop-and-go fashion. It is unpredictable when each process in memory will execute, when it will pause, how it will advance, and how long each program will take to finish; in other words, processes advance asynchronously. Even so, as long as the running environment is the same, running the same job repeatedly yields exactly the same results.

20. DMA

Direct Memory Access (DMA) is a memory-access technique in computing. It lets certain hardware subsystems inside the computer (peripherals) read and write system memory independently, without going through the central processing unit (CPU). DMA is a fast way to transfer data that places little extra burden on the processor. Many hardware systems use DMA, including hard-disk controllers, graphics cards, network cards, and sound cards.

21. Spooling

Spooling, short for Simultaneous Peripheral Operation On-Line (that is, on-line simultaneous peripheral operation), is a technique by which slow character devices exchange information with the host computer; it is often called the "spooling technique".

22. The ways external storage space is allocated, with their merits and drawbacks.

1) Contiguous allocation

Contiguous allocation: when a file is created it is given a contiguous set of blocks; the FAT holds one entry per file, recording the starting block and the file's length. This is advantageous for sequential files.

Advantages:

Simple; suited to write-once files.

Supports both sequential and random access, and sequential access is fast.

Requires the fewest seeks and the least seek time (because the space is contiguous, accessing the next block usually needs no head movement, and when the head must move, it moves by only one track).

Disadvantages:

The file cannot grow dynamically (the free block just past the end of the file may already have been given to another file).

Unfavorable to insertions into and deletions from files.

External fragmentation (after files are repeatedly created and deleted) makes it hard to find a contiguous run of blocks of sufficient size; compaction is then required.

The file's size must be declared when it is created.

2) Chained (linked) allocation

Chained allocation: the file's data is stored in a number of non-contiguous physical blocks linked together by pointers, each block pointing to the next. The FAT again needs only one entry per file, containing the file name, the starting block number, and the last block number. Any free block can be added to the chain.

Advantages:

Better disk-space utilization, and no external fragmentation.

Convenient for insertion into and deletion from files.

Convenient for dynamic file growth.

Disadvantages:

Access is slow; it is generally suitable only for sequential access, not random access: finding a given block means following the pointers from the beginning.

Reliability problems such as broken pointers, and more seeks and seek time.

The link pointers themselves take up space. Blocks are therefore often grouped into clusters, and allocation is done by cluster rather than by block (which increases disk fragmentation).

3) Indexed allocation

Indexed allocation: the FAT contains a one-level index for each file; the index has one entry for each block (extent) allocated to the file. The file's index is stored in a separate block, and the file's entry in the FAT points to that block.

Advantages:

It keeps the advantages of linked allocation while curing its drawbacks: allocating by block eliminates external fragmentation, while allocating by variable-size extents improves locality. Indexed allocation supports both sequential access and direct (random) access to the file, and is the commonly used scheme.

It satisfies the needs of dynamically growing files and of insertion and deletion (as long as free blocks exist).

It can also make full use of external storage space.

Disadvantages:

More seeks and more seek time.

The index table itself adds overhead: extra space in memory and on external storage, and extra access time.

