The Programmer's Self-Cultivation: Operating Systems

Source: Internet
Author: User

Hopefully, this article will give you a comprehensive understanding of the operating system!

Before reading this article, we recommend that you read "Create Four Computers by Yourself".

Directory:

1. Process states, the state transition diagram, and the events that cause each transition.

2. Differences between processes and threads.

3. Inter-process communication methods.

4. Methods of thread synchronization.

5. Thread implementation methods (the difference between user threads and kernel threads).

6. Differences between user mode and kernel mode.

7. Differences between the user stack and the kernel stack.

8. Memory pools, process pools, and thread pools.

9. The concept of deadlock, its causes, the four conditions that lead to deadlock, the approaches to handling deadlock, methods to prevent deadlock, and methods to avoid deadlock.

10. Process scheduling algorithms.

11. Windows memory management (block, page, segment, and segment-page).

12. Algorithms used for contiguous memory allocation and their respective advantages and disadvantages.

13. Dynamic and static linking.

14. Basic paging and request paging storage management methods.

15. Basic segmentation and request segmentation storage management methods.

16. Comparison of the advantages and disadvantages of segmentation and paging.

17. Page replacement algorithms, and calculating the number of page replacements needed. (How is LRU implemented in a program?)

18. Virtual memory: definition and implementation.

19. Four features of the operating system.

20. DMA.

21. Spooling.

22. File storage allocation methods and their advantages and disadvantages.

The operating system is a computer program that manages the computer's hardware and software resources; it is the kernel and cornerstone of the computer system. The operating system handles basic tasks such as managing and configuring memory, deciding the priority with which system resources are supplied to competing requests, controlling input and output devices, operating the network, and managing the file system. It also provides an interface through which users interact with the system.

A computer program running on an operating system usually consists of one or more processes. Therefore, this article begins with the process!

1. Process states, the state transition diagram, and the events that cause each transition.

A process has three basic states: ready, running, and blocked. The transitions are as follows (a short C sketch follows the list):

Note: creation and exit are not counted here. Blocked is also called waiting. The difference between waiting and ready is that a waiting process is waiting for some resource other than the CPU, while a ready process is waiting only for the CPU.

1) Ready → running: when the process scheduler selects a ready process according to the chosen policy and assigns the processor to it, the process changes from ready to running;

2) Running → waiting: if a running process must wait for some event, it changes from running to waiting. For example, a process that issues an input/output request enters a state of waiting for the external device to transfer information; a process whose resource request (main memory space or an external device) cannot be satisfied enters a state of waiting for resources; a process that fails while running (a program error or a main memory read/write error) enters a state of waiting for intervention;

3) Waiting → ready: when the event a waiting process is waiting for has occurred, such as the input/output completing, the resource being granted, or the error being handled, the process does not move directly to running. It first moves to the ready state, and the process scheduler later switches it to running at an appropriate time;

4) Running → ready: a running process is paused because its time slice is used up; or, in a system that uses preemptive priority scheduling, a higher-priority process needs to run and the current process is forced to give up the processor, so it changes from running to ready.
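As an illustrative sketch only (the type and function names are hypothetical), the three states and four transitions above can be written down in C like this:

```c
#include <stdio.h>

/* Illustrative process states and scheduling events; names are hypothetical. */
typedef enum { READY, RUNNING, BLOCKED } proc_state;
typedef enum { SCHEDULED, WAIT_EVENT, EVENT_DONE, TIME_SLICE_UP } event_t;

/* Apply one of the four transitions described above. */
proc_state transition(proc_state s, event_t e) {
    if (s == READY   && e == SCHEDULED)     return RUNNING;  /* ready -> running   */
    if (s == RUNNING && e == WAIT_EVENT)    return BLOCKED;  /* running -> waiting */
    if (s == BLOCKED && e == EVENT_DONE)    return READY;    /* waiting -> ready   */
    if (s == RUNNING && e == TIME_SLICE_UP) return READY;    /* running -> ready   */
    return s;  /* any other combination is not a legal transition */
}

int main(void) {
    proc_state s = READY;
    s = transition(s, SCHEDULED);   /* now RUNNING */
    s = transition(s, WAIT_EVENT);  /* now BLOCKED, e.g. waiting for I/O */
    s = transition(s, EVENT_DONE);  /* back to READY */
    printf("final state: %d\n", s);
    return 0;
}
```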

2. Differences between processes and threads.

For more information, see these articles shared earlier:

Process and thread graphic description

Differences between processes and threads

3. Inter-process communication methods.

Taking Linux as an example (Windows is similar), inter-process communication in Linux works in the following ways:

1) Pipes (pipe) and named pipes (named pipe): pipes can be used for communication between related (parent-child) processes. Named pipes remove the restriction that pipes have no name, so in addition to what pipes can do, they also allow communication between unrelated processes (see the pipe sketch after this list);

2) Signals (signal): a signal is a relatively complex communication method used to notify a receiving process that some event has occurred. Besides inter-process communication, a process can also send a signal to itself. In addition to the early UNIX signal function signal, Linux supports sigaction, whose semantics follow the POSIX.1 standard (sigaction is in fact based on BSD; to provide a reliable signal mechanism and unify the external interface, BSD re-implemented the signal function on top of sigaction);

3) Message queues: a message queue is a linked list of messages, and includes the POSIX message queue and the System V message queue. A process with sufficient permission can add messages to a queue, and a process with read permission can read messages from it. Message queues overcome the drawbacks that signals carry little information and that pipes carry only unformatted byte streams with a limited buffer size;

4) Shared memory: allows multiple processes to access the same region of memory. It is the fastest form of IPC available and was designed in response to the lower efficiency of the other communication mechanisms. It is often used together with other mechanisms, such as semaphores, to achieve synchronization and mutual exclusion between processes;

5) Semaphores (semaphore): mainly used for synchronization between processes and between different threads of the same process;

6) Sockets (socket): a more general inter-process communication mechanism that can also be used between processes on different machines. Sockets were originally developed by the BSD branch of UNIX, but are now portable to other UNIX-like systems: both Linux and System V variants support sockets.
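As a small illustration of the first mechanism, here is a minimal parent-child pipe example using the POSIX pipe(), fork(), write(), and read() calls:

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                       /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                  /* child: writes into the pipe */
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }
    /* parent: reads from the pipe (blocks until data arrives) */
    close(fd[1]);
    char buf[64];
    ssize_t n = read(fd[0], buf, sizeof(buf));
    if (n > 0) printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}
```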

4. Methods of thread synchronization.

There are four main thread synchronization mechanisms: critical sections, mutexes, semaphores, and events.

Their main differences and features are as follows:

1) Critical section: serializes access to a shared resource or code section across threads. It is fast and suitable for controlling access to data. Only one thread may access the shared resource at any time; if several threads try to enter, once one thread has entered, the others are suspended until it leaves the critical section, after which another thread may take over.

2) Mutex: uses a mutex object; only the thread that owns the mutex object has permission to access the shared resource. Because there is only one mutex object, this guarantees that the shared resource is never accessed by several threads at the same time. Mutual exclusion can protect shared resources not only within a single application but also across different applications (a pthread mutex sketch follows this list).

3) Semaphore: allows several threads to access the same resource at the same time, but limits the maximum number of threads that may access it simultaneously.

4) Event: keeps threads synchronized through notifications, and also makes it convenient to implement operations that compare the priorities of multiple threads.
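The names above are the Win32 synchronization objects; as a hedged POSIX counterpart, here is a minimal pthread mutex protecting a shared counter (compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread increments the shared counter under the mutex. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                    /* critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* always 200000 with the mutex held */
    return 0;
}
```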

5. Thread implementation methods (in other words, the difference between user threads and kernel threads).

Thread implementations fall into two categories: user-level threads and kernel-level threads. The latter are also called kernel-supported threads or lightweight processes. In a multi-threaded operating system, each system implements threads differently: some systems implement user-level threads, others implement kernel-level threads.

A user-level thread is implemented in the user program without kernel support. It does not rely on the operating system core; the application process uses a thread library that provides the functions for creating, synchronizing, scheduling, and managing threads, and thereby controls its user threads. No switch between user mode and kernel mode is needed, so it is fast, but the operating system kernel is unaware that multiple threads exist. Therefore, when one thread blocks, the whole process (including all of its threads) blocks. And because the processor's time slices are allocated with the process as the basic unit, the execution time each thread receives is correspondingly reduced.

Kernel-level thread: created and destroyed by the operating system kernel. The kernel maintains the context information of processes and threads and performs thread switching. When a kernel thread blocks on an I/O operation, it does not affect the running of the other threads.

The differences between user-level threads and kernel-level threads are as follows (a short pthread sketch follows the list):

1) Kernel-supported threads are visible to the OS kernel, while user-level threads are not.

2) The creation, cancellation, and scheduling of user-level threads do not require support from the OS kernel; they are handled at the language level (for example, in Java). The creation, cancellation, and scheduling of kernel-supported threads require OS kernel support and are essentially the same as process creation, cancellation, and scheduling.

3) When a user-level thread executes a system call instruction, the whole process to which it belongs is blocked; when a kernel-supported thread executes a system call instruction, only that thread is blocked.

4) In a system with only user-level threads, CPU scheduling is done at the process granularity, and the multiple threads of a running process take turns under the control of the user program. In a system with kernel-supported threads, CPU scheduling is done at the thread granularity, and the OS's thread scheduler is responsible for scheduling threads.

5) The program entity of a user-level thread is a program that runs in user mode, while the program entity of a kernel-supported thread is a program that can run in either mode.
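On modern Linux, pthreads (NPTL) are implemented 1:1 on top of kernel threads, so blocking in one thread does not block its siblings; a minimal sketch (compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* One thread blocks (simulating slow I/O)... */
static void *blocker(void *arg) {
    (void)arg;
    sleep(2);                           /* blocks only this kernel thread */
    printf("blocker: done waiting\n");
    return NULL;
}

/* ...while the other keeps making progress. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 4; i++) {
        printf("worker: still running (%d)\n", i);
        usleep(500 * 1000);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, blocker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```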

6. Differences between user mode and kernel mode.

Before talking about the differences between user mode and kernel mode, we should first introduce the concept of "privilege level".

Anyone familiar with Unix/Linux knows that we call the fork function to create a child process. In fact, fork completes the work of creating a process through a system call, and the concrete work is done by sys_fork. For any operating system, creating a new process is a core function: it requires a great deal of careful low-level work and consumes the system's physical resources, such as allocating physical memory, copying information from the parent process, and setting up copies of the page directory and page tables. These things obviously cannot be left to any arbitrary program, so the concept of privilege levels arises naturally: the most critical powers must be held by highly privileged programs, so that management can be centralized and conflicts over access to limited resources reduced.

Privilege levels are clearly a very effective way to manage and control program execution, so the hardware provides a great deal of support for them. CPUs with the Intel x86 architecture have four privilege levels, 0 through 3, where level 0 is the highest and level 3 the lowest. When executing each instruction, the hardware checks the instruction's privilege level; the related concepts are CPL, DPL, and RPL. The hardware provides the privilege-level mechanism, and making good use of it is naturally the software's job; this is exactly what the operating system does. Unix/Linux uses only level 0 and level 3. That is, in a Unix/Linux system, an instruction running at privilege level 0 has the highest power the CPU can provide, while an instruction running at privilege level 3 has the lowest, most basic power the CPU provides.

OK, now that you understand the concept of "privilege level", the difference between user mode and kernel mode is easier to grasp. Kernel mode and user mode are two operating levels of the operating system. When a program runs at privilege level 3 it is said to run in user mode, because this is the lowest privilege level and the one used by ordinary user processes; most of the programs users directly face run in user mode. Conversely, when a program runs at privilege level 0 it is said to run in kernel mode. Programs running in user mode cannot directly access the kernel's data structures and routines. When we run a program, it spends most of its time in user mode and switches to kernel mode only when it needs the operating system's help to complete work that it has neither the privilege nor the ability to do itself. Generally, the following three situations cause a switch from user mode to kernel mode:

1) System calls

This is how a user-mode process actively switches to kernel mode. The user-mode process requests services provided by the operating system through a system call to get its work done; for example, fork() actually executes a system call that creates a new process. The core of the system call mechanism is an interrupt that the operating system opens specially to user programs, such as the int 0x80 interrupt in Linux.
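A small sketch comparing the glibc wrapper with the same service invoked as a raw system call via Linux's syscall() function (both trap from user mode into kernel mode and back):

```c
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    /* The library wrapper getpid()... */
    pid_t a = getpid();
    /* ...and the same kernel service invoked as a raw system call. */
    pid_t b = (pid_t)syscall(SYS_getpid);
    printf("getpid() = %d, syscall(SYS_getpid) = %d\n", (int)a, (int)b);
    return 0;
}
```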

2) Exceptions

When the CPU is running a program in user mode, some unforeseen exception may occur. The currently running process then switches to the kernel routine that handles the exception, which also means a switch to kernel mode; a page fault is one example.

3) Peripheral device interrupts

When a peripheral device finishes the operation the user requested, it sends an interrupt signal to the CPU. The CPU then suspends the instruction it was about to execute and runs the handler corresponding to the interrupt signal. If the instruction previously being executed belonged to a user-mode program, this naturally switches from user mode to kernel mode. For example, when a hard disk read or write completes, the system switches to the hard disk interrupt handler to carry out the subsequent work.

These three are the most important ways the system switches from user mode to kernel mode at run time. A system call can be considered to be initiated actively by the user process, while exceptions and peripheral device interrupts are passive.

7. Differences between user stack and kernel stack.

In the operating system, each process has two stacks: one that lives in user space (the user stack) and one kernel stack. When a process runs in user space, the CPU's stack pointer register holds a user stack address and the user stack is used; when the process is in kernel space, the stack pointer register holds an address in the kernel stack, and the kernel stack is used.

The kernel stack is a part of the memory that belongs to the operating system space. Its main purposes are:

1) To save the interrupt context. For nested interrupts, the context of each interrupted routine is pushed onto the system stack in order and popped off in reverse order as each interrupt returns;

2) To save the parameters, return values, return addresses, and local variables of the subroutines (functions) called within the operating system.

The user stack is a region of the process's user space. It holds the parameters, return values, return addresses, and local variables of the subroutines (functions) called by the user program.

PS: So why not use just one stack? Why waste the extra space?

1) If only the system stack were used: the system stack is generally limited in size. If there are 16 interrupt priority levels, the system stack typically needs a depth of only 15 (at most 15 lower-priority interrupt contexts are saved while the highest-priority handler is running), but a user program may nest subroutine calls many more times than that, so the parameters, return values, return addresses, and local variables of calls beyond that depth could not be saved, and the user program could not run properly.

2) If only the user stack were used: we know that system code needs to run under some protection, but the user stack lives in user space (that is, it is used while the CPU is in user mode; the CPU is protected only in kernel mode), so it cannot provide the necessary protection (or could do so only with great difficulty).

8. Memory Pool, process pool, and thread pool.

First, the concept of "pooling". Put simply, pooling means setting aside a large amount of a resource in advance for later use and reuse. Pooling is widely applied: memory pools, thread pools, connection pools, and so on. For more about memory pools, see the memory pool implementations of open-source web servers such as Apache and Nginx.

In practice, allocating memory and creating processes and threads all involve system calls, and a system call forces the program to switch from user mode to kernel mode, which is a very time-consuming operation. Therefore, when a program needs to frequently allocate and release memory, or create and destroy processes or threads, memory pools, process pools, and thread pools are usually used to improve performance.

Thread pool: the principle is very simple, similar to the concept of a buffer in the operating system. The flow is as follows: a number of threads are started and put to sleep. When a thread is needed to do a specific piece of work, a sleeping thread in the pool is woken up and given the work; when the work is finished, the thread goes back to sleep instead of being destroyed.

A process pool works on the same principle as a thread pool.

Memory pool: the program asks the operating system for a sufficiently large block of memory in advance. After that, when the program needs memory it does not request it from the operating system but takes it directly from the pool; likewise, when the program frees memory it does not actually return it to the operating system but returns it to the pool. When the program exits (or at some particular time), the pool releases the real memory it originally requested.
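A minimal fixed-size-block memory pool sketch along these lines (one up-front allocation carved into blocks kept on a free list; the sizes and names are illustrative):

```c
#include <stdio.h>

#define BLOCK_SIZE  64
#define BLOCK_COUNT 1024

/* One big allocation up front; freed blocks are kept on a free list. */
typedef struct block { struct block *next; } block;

static unsigned char pool_mem[BLOCK_SIZE * BLOCK_COUNT];
static block *free_list = NULL;

static void pool_init(void) {
    for (int i = 0; i < BLOCK_COUNT; i++) {
        block *b = (block *)&pool_mem[i * BLOCK_SIZE];
        b->next = free_list;
        free_list = b;
    }
}

static void *pool_alloc(void) {          /* take a block from the pool */
    if (!free_list) return NULL;
    block *b = free_list;
    free_list = b->next;
    return b;
}

static void pool_free(void *p) {         /* return a block to the pool */
    block *b = (block *)p;
    b->next = free_list;
    free_list = b;
}

int main(void) {
    pool_init();
    void *a = pool_alloc();
    void *b = pool_alloc();
    printf("allocated %p and %p from the pool\n", a, b);
    pool_free(a);
    pool_free(b);
    return 0;
}
```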

9. The concept of deadlock, its causes, the four conditions that lead to deadlock, methods to prevent deadlock, and methods to avoid deadlock.

In a computer system, if the system's resource allocation policy is inappropriate, or, more commonly, if the programs written by programmers contain errors, improper competition for resources can lead to deadlock.

The causes of deadlock are as follows:

1) system resources are insufficient.

2) The process running sequence is not appropriate.

3) Improper resource allocation.

If system resources are sufficient and every process's resource requests can be satisfied, the probability of deadlock is very low; otherwise, competition for limited resources can lead to deadlock. Second, differences in the order and speed at which processes run may also lead to deadlock.

Four Conditions for deadlock:

1) Mutual exclusion condition: a resource can be used by only one process at a time.

2) Hold-and-wait condition: while a process is blocked requesting resources, it keeps holding the resources it has already obtained.

3) No-preemption condition: resources a process has obtained cannot be forcibly taken away before the process has finished using them.

4) Circular-wait condition: several processes form a head-to-tail circular chain in which each waits for a resource held by the next.

These four conditions are necessary for deadlock: whenever a deadlock occurs in the system, all of them hold, and as long as any one of them is not satisfied, no deadlock can occur.

Removing and preventing deadlock:

Understanding the causes of deadlock, and especially its four necessary conditions, lets us avoid, prevent, and remove deadlock as far as possible. In system design and process scheduling, attention should therefore be paid to how to keep these four necessary conditions from holding, and to how to choose reasonable resource allocation algorithms so that processes do not permanently occupy system resources.

In addition, processes should be prevented from holding resources while they wait for others. While the system is running, it can dynamically check each resource request a process makes and decide whether to grant it based on the result: if granting it could put the system into a deadlock state, it is not granted; otherwise it is. Resource allocation should therefore be planned carefully.
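As a sketch of how breaking the circular-wait condition prevents deadlock: two threads that acquire two pthread mutexes in opposite orders can deadlock, while imposing a single global lock order (both take lock_a before lock_b, as below) makes that impossible:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Both threads follow the same global order: lock_a, then lock_b.
   If thread2 instead took lock_b first, each thread could end up
   holding one lock and waiting forever for the other (circular wait). */
static void *thread1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock_a);
    usleep(1000);                      /* widen the timing window deliberately */
    pthread_mutex_lock(&lock_b);
    printf("thread1 holds both locks\n");
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *thread2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock_a);       /* same order breaks circular wait */
    pthread_mutex_lock(&lock_b);
    printf("thread2 holds both locks\n");
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```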

10. Process scheduling algorithms.

Several process scheduling algorithms:

I. First-come, first-served and shortest-job (process)-first scheduling algorithms

1. First-come, first-served (FCFS) scheduling. FCFS is the simplest scheduling algorithm and can be used for both job scheduling and process scheduling. It favors long jobs (processes) over short ones, and it suits CPU-bound jobs better than I/O-bound jobs (processes).

2. Shortest-job (process)-first scheduling. The shortest-job (process)-first algorithm (SJ/PF) gives priority to short jobs or short processes and can be used for job scheduling or process scheduling. However, it is unfavorable to long jobs, it cannot guarantee that urgent jobs (processes) are handled promptly, and the length of a job is only an estimate. A small worked example follows.
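As an illustration only (the burst times below are made-up values), here is a small C program comparing average waiting time under FCFS and under shortest-job-first for the same three jobs:

```c
#include <stdio.h>
#include <stdlib.h>

/* Average waiting time when jobs run in the given order. */
static double avg_wait(const int *burst, int n) {
    int wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += wait;       /* this job waits for everything before it */
        wait  += burst[i];
    }
    return (double)total / n;
}

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int fcfs[] = {24, 3, 3};             /* hypothetical arrival order */
    int sjf[]  = {24, 3, 3};
    int n = 3;
    qsort(sjf, n, sizeof(int), cmp_int); /* SJF: shortest burst first */

    printf("FCFS average wait: %.2f\n", avg_wait(fcfs, n)); /* (0+24+27)/3 = 17 */
    printf("SJF  average wait: %.2f\n", avg_wait(sjf, n));  /* (0+3+6)/3   = 3  */
    return 0;
}
```

With these numbers FCFS gives an average wait of 17 time units while SJF gives 3, which is exactly the sense in which SJF favors short jobs.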

II. Highest-priority-first scheduling algorithms

1. Types of priority scheduling algorithms. The highest-priority-first (FPF) scheduling algorithm was introduced to take care of urgent jobs so that they receive priority treatment after entering the system. It is often used in batch systems as a job scheduling algorithm, is used for process scheduling in many operating systems, and can also be used in real-time systems. When used for job scheduling, the jobs with the highest priority in the backup queue are loaded into memory; when used for process scheduling, the processor is assigned to the process with the highest priority in the ready queue. The algorithm can be further divided into the following two kinds:

1) Non-preemptive priority algorithm

2) Preemptive priority scheduling algorithm (used in high-performance computer operating systems)

2. Types of priority

The core of the highest-priority scheduling algorithm lies in whether priorities are static or dynamic, and in how a process's priority is determined.

3. High Response Ratio Priority Scheduling Algorithm

To make up for the shortcomings of the shortest-job-first algorithm, dynamic priority is introduced so that a job's priority rises at a certain rate as its waiting time grows. The rule can be described as: priority = (waiting time + required service time) / required service time = response time / required service time.
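A minimal sketch of picking the next job by this response-ratio formula (the waiting and service times are hypothetical):

```c
#include <stdio.h>

/* response ratio = (waiting time + required service time) / required service time */
static double response_ratio(double wait, double service) {
    return (wait + service) / service;
}

int main(void) {
    double wait[]    = {10, 4, 1};   /* hypothetical waiting times */
    double service[] = { 5, 2, 1};   /* hypothetical required service times */
    int best = 0;
    for (int i = 0; i < 3; i++) {
        double r = response_ratio(wait[i], service[i]);
        printf("job %d: response ratio %.2f\n", i, r);
        if (r > response_ratio(wait[best], service[best])) best = i;
    }
    printf("schedule job %d next\n", best);
    return 0;
}
```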

III. Time-slice-based round-robin scheduling algorithms

1. Round-robin scheduling. The round-robin method is generally used for process scheduling. On each scheduling decision, the CPU is given to the process at the head of the ready queue, which runs for one time slice. When the time slice is used up, a timer raises a clock interrupt, the process is suspended, and it is moved to the tail of the ready queue.
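A minimal round-robin simulation along these lines (the time quantum and burst times are made-up values):

```c
#include <stdio.h>

int main(void) {
    int burst[] = {5, 3, 8};        /* hypothetical remaining CPU time per process */
    int n = 3, quantum = 2, t = 0, remaining = n;

    /* Repeatedly give each unfinished process one time slice. */
    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (burst[i] <= 0) continue;
            int run = burst[i] < quantum ? burst[i] : quantum;
            t += run;
            burst[i] -= run;
            if (burst[i] == 0) {
                printf("process %d finishes at time %d\n", i, t);
                remaining--;
            }
        }
    }
    return 0;
}
```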

2. Multilevel feedback queue scheduling. The multilevel feedback queue algorithm does not need to know in advance how much time each process requires, and it is a widely recognized process scheduling algorithm. It works as follows:

1) Several ready queues are set up and each is assigned a different priority. The higher a queue's priority, the smaller the time slice given to each process in it.

2) When a new process enters memory, it is first placed at the tail of the first queue and scheduled according to the FCFS principle. If it finishes within one time slice, it can leave the system; if not, it is moved to the tail of the second queue to wait for scheduling, and so on. In this way, a long job (process) descends from the first queue down to the nth (last) queue, where it is then scheduled round-robin with the nth queue's time slice.

3) The scheduler schedules processes in the second queue only when the first queue is empty; processes in queue i are scheduled, with the corresponding time slice, only when queues 1 through i-1 are all empty.

4) If the processor is serving a process in queue i and a new process enters a higher-priority queue, the new process preempts the running process, and the preempted process is placed at the tail of queue i.

11. Windows memory management (block, page, segment, and segment-page).

Block Management

Main memory is divided into large blocks. When a needed piece of the program is not in main memory, a block of main memory is allocated and the piece is loaded into it. Even if only a few bytes are needed, the program is given a whole block. This wastes a lot of space, but it is easy to manage.

Page Management

Main memory is divided into pages, and the space of each page is much smaller than that of a block. Obviously, this method has much better space utilization than block management.

Segment management

Main memory is divided into segments, and the space of each segment is in turn much smaller than that of a page. This method has even better space utilization than page management, but it has another drawback: a program fragment may be split into dozens of segments, so a great deal of time is wasted computing the physical address of each segment (and, as everyone knows, I/O and address lookups are among the most time-consuming things a computer does).

Segment-page management (currently in common use)

It combines the advantages of segment management and page management: the address space is divided into several segments, and each segment is in turn divided into several pages.

12. Algorithms used for contiguous memory allocation and their respective advantages and disadvantages.

Contiguous memory allocation methods include single contiguous allocation, fixed partition allocation, dynamic partition allocation, and dynamically relocatable partition allocation.

Single contiguous allocation: can only be used in single-user, single-task operating systems.

Fixed partition allocation: a storage management method capable of running multiple programs.

Dynamic partition allocation: dynamically allocates memory space based on the actual needs of the process.

Dynamically relocatable partition allocation: a system or user program must be loaded into a contiguous region of memory.

13. Dynamic and static linking.

Static linking copies the required executable code directly into the calling site at link time. The advantage is that the program does not need the dependent libraries when it is released; that is, it can be distributed without the libraries and executed on its own, but the resulting binary may be comparatively large.

Dynamic linking does not copy the executable code at compile time; instead it records a set of symbols and parameters and passes this information to the operating system when the program is loaded or run. The operating system is then responsible for loading the required dynamic library into memory, and when the program reaches the relevant code, it shares the dynamic library's executable code that is already loaded in memory, achieving linking at run time. The advantage is that multiple programs can share a single copy of the code instead of keeping several copies on disk; the disadvantage is that loading at run time may slightly hurt the program's early execution performance.
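As a hedged illustration of run-time dynamic linking on Linux, the standard dlopen/dlsym interface loads a shared library and resolves a symbol while the program is running (this example uses the system math library; on older glibc, link with -ldl):

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Load the shared math library at run time. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Resolve the cos symbol from the loaded library. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) { fprintf(stderr, "%s\n", dlerror()); dlclose(handle); return 1; }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}
```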

14. Basic paging and request paging storage management methods.

The basic paging storage management method has the following features:

1) One-time loading. A job must be loaded into memory in its entirety before it can run. In many jobs, not all of the program and data are used on every run, so loading everything at once wastes memory space.

2) Residency. Once a job has been loaded into memory, it stays there until the job finishes. Even though the running process may spend long periods waiting for I/O, and some program modules may run only once and never be needed again, they continue to occupy valuable memory.

Request (demand) paging storage management is a common way to implement virtual memory, built on top of basic paging. The basic idea is: before a process starts running, only some of the pages it is about to execute are loaded, and it can begin running with just those; pages that are accessed but not yet in memory are brought in dynamically through page-fault interrupts. When memory is full and a new page must be loaded, a replacement algorithm selects a page to evict, freeing space for the new page. Implementing request paging requires hardware support, including a page table mechanism, a page-fault interrupt mechanism, and an address translation mechanism.

15. Basic segmentation and request segmentation storage management methods. (Omitted)

16. Comparison of the advantages and disadvantages of segmentation and paging.

Segmentation and paging are two ways to divide or map addresses. The differences between the two are as follows:

1) A page is a physical unit of information. Paging is used to achieve discrete allocation so as to reduce external fragmentation of memory and improve memory utilization; in other words, paging exists only for the convenience of system management and is not a user-visible need (it is transparent to the user). A segment is a logical unit of information: it contains a relatively complete set of related information (for example a data segment, a code segment, or a stack segment). The purpose of segmentation is to better satisfy the needs of users (and it is visible to them).

2) The page size is fixed and determined by the system. A logical address is split by the hardware into a page number and an offset within the page, so a system can have only one page size. Segment lengths are not fixed; they are determined by the program, and the compiler or editor usually divides the source program into segments according to the nature of the information.

3) The address space of a paged job is one-dimensional, i.e. a single linear space: the programmer needs only a single value (a linear address) to identify a location. The address space of a segmented job is two-dimensional: to identify an address, the programmer must give both the segment name (for example data segment, code segment, or stack segment) and the offset within the segment.

4) Both pages and segments have storage protection mechanisms, but the access permissions differ: a segment has three permissions (read, write, and execute), while a page has only two (read and write).

17. Page replacement algorithms.

1) Optimal Replacement Algorithm (OPT) (ideal replacement algorithm)

This is an idealized page replacement algorithm that cannot actually be implemented. The basic idea is: when a page fault occurs, some of the pages in memory will be accessed soon (including the page containing the next instruction), while others may not be accessed for another 10, 100, or 1,000 instructions. Each page can be labeled with the number of instructions that will execute before the page is next accessed, and the optimal algorithm simply says to replace the page with the largest label. The only problem is that it cannot be implemented: at the moment a page fault occurs, the operating system has no way of knowing when each page will next be accessed. Although the algorithm is not realizable, it can be used as a yardstick to measure and compare the performance of realizable algorithms.

2) FIFO algorithm (FIFO)

The simplest page replacement algorithm is first-in, first-out (FIFO). Its essence is to always replace the page that has stayed in main memory the longest (the "oldest" page); that is, the page that entered memory first is the first to leave. The rationale is that the page brought in earliest is the most likely to be no longer in use, compared with a page brought in just now. A FIFO queue is created to hold all pages currently in memory: the page to be replaced is always at the head of the queue, and a newly loaded page is inserted at the tail.

This algorithm is ideal only when the address space is accessed in linear order; otherwise its efficiency is not high, because the pages that are accessed frequently often also stay in main memory the longest, and they end up being replaced simply because they have become "old".

Another drawback of FIFO is its anomalous behavior (Belady's anomaly): adding more page frames can actually increase the page-fault rate. Of course, the access patterns that cause this anomaly are rare.

3) least recently used (LRU) Algorithm

The main difference between the FIFO algorithm and the OPT algorithm is that FIFO uses the time since a page entered memory as the basis for replacement, while OPT uses the time until the page will next be used. If we take the recent past as an approximation of the near future, we can replace the page that has gone unused for the longest time. The essence of the algorithm is: when a page must be replaced, choose the page that has not been used for the longest period in the recent past. This is called the least recently used (LRU) algorithm. LRU tracks when each page was last used; when a page must be replaced, it selects the page whose last use lies furthest in the past.
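To the directory's question of how LRU can be implemented: a minimal sketch that keeps a last-used timestamp per frame and counts page faults for a given reference string (the reference string and frame count are made-up values):

```c
#include <stdio.h>

#define FRAMES 3

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};   /* hypothetical reference string */
    int n = sizeof(refs) / sizeof(refs[0]);

    int frame[FRAMES], last_used[FRAMES];
    for (int i = 0; i < FRAMES; i++) { frame[i] = -1; last_used[i] = -1; }

    int faults = 0;
    for (int t = 0; t < n; t++) {
        int page = refs[t], hit = -1;
        for (int i = 0; i < FRAMES; i++)
            if (frame[i] == page) { hit = i; break; }

        if (hit >= 0) {
            last_used[hit] = t;                    /* page hit: refresh timestamp */
        } else {
            faults++;
            int victim = 0;                        /* evict least recently used frame */
            for (int i = 1; i < FRAMES; i++)
                if (last_used[i] < last_used[victim]) victim = i;
            frame[victim] = page;
            last_used[victim] = t;
        }
    }
    printf("page faults: %d\n", faults);
    return 0;
}
```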

18. Virtual Memory definition and implementation method.

Virtual memory is a memory management technique used in computer systems. It gives an application the illusion that it has contiguous available memory (a continuous, complete address space), while in reality that memory is usually split across multiple physical fragments, with some parts temporarily stored on external disk storage and exchanged with main memory as needed. Compared with systems that do not use virtual memory, systems that use this technique make it easier to write large programs and use real physical memory (such as RAM) more efficiently.

19. Four features of the operating system.

1) Concurrency (concurrence)

Parallelism and concurrency are similar but different concepts. Parallelism means two or more events occur at the same instant; it is a microscopic notion, i.e. the events physically happen simultaneously. Concurrency means two or more events occur within the same time interval; it is a macroscopic notion. In a multiprogramming environment, concurrency means that over a period of time multiple programs are running at once, yet on a single-processor system only one program can execute at any given moment, so at the micro level the programs take turns executing. Note that ordinary programs are static entities and cannot execute concurrently by themselves. To let programs run concurrently, the system must create a process for each of them. A process, also called a task, is the basic unit that runs independently in the system and serves as the entity to which resources are allocated. Multiple processes can execute concurrently and exchange information; a process needs resources such as the CPU, storage space, and I/O devices in order to run. The purpose of introducing processes into the operating system is precisely to allow programs to execute concurrently.

2) Sharing

Sharing means that resources in the system can be used jointly by the multiple concurrent processes in memory. Because resources have different properties, processes share them in different ways, which can be divided into mutually exclusive sharing and simultaneous access.

3) Virtualization (virtual)

Virtualization means turning one physical entity into several logical counterparts by some technique. In the operating system, virtualization is mainly realized through time sharing. Obviously, if n is the number of virtual logical devices mapped onto one physical device, the speed of each virtual device is necessarily no more than 1/n of the physical device's speed.

4) Asynchronism

In a multiprogramming environment, multiple processes are allowed to execute concurrently. Because of resource constraints, a process generally does not run "in one breath" but advances in a stop-and-go fashion. It is unpredictable when each process in memory will execute, when it will pause, how it will advance, and how long each program will take to finish; in other words, processes advance asynchronously. Nevertheless, as long as the running environment is the same, running a job multiple times will produce the same results.

20. DMA

Direct memory access (DMA) is a memory access technique that allows certain hardware subsystems (computer peripherals) to read and write system memory independently, bypassing the central processing unit (CPU). For the same processor load, DMA is a fast way to transfer data. Many hardware systems use DMA, including hard disk controllers, graphics cards, network cards, and sound cards.

21. Spooling

Spooling, short for Simultaneous Peripheral Operations On-Line, is a technique for exchanging information between slow character devices and the computer host; it is commonly called the "pseudo-offline" technique.

22. File storage allocation methods and their advantages and disadvantages.

1) Contiguous allocation

Contiguous allocation: when a file is created, a group of consecutive blocks is allocated to it. In the FAT (file allocation table), each file needs only one entry, giving the starting block and the length of the file. It works well for sequential files.

Advantages:

Simple; suitable for write-once files

Supports both sequential and random access, with fast sequential access

Requires the fewest disk seeks and the least seek time (because the space is contiguous, accessing the next block generally does not require moving the head; when it does, only one track of movement is needed)

Disadvantages:

The file cannot grow dynamically (the free block just past the end of the file may already have been allocated to another file)

Unfavorable to insertions and deletions within the file

External fragmentation (after files are repeatedly created and deleted) makes it hard to find contiguous blocks of sufficient size; compaction is then required

The file size must be declared when the file is created

2) Chained allocation

Chained allocation: a file's data is stored in a number of non-contiguous physical blocks linked by pointers, each block pointing to the next. In the FAT, each file again needs only one entry, containing the file name, the starting block number, and the last block number. Any free block can be appended to the chain.

Advantages:

Improves disk space utilization, with no external fragmentation

Convenient for insertions and deletions within the file

Convenient for dynamic file growth

Disadvantages:

Slow access: generally suitable only for sequential access, not for random access, because finding a particular block requires following the pointers from the beginning of the chain

Reliability problems (for example, a damaged pointer breaks the chain), plus more seeks and more seek time

The link pointers consume some space. A common remedy is to group several blocks into a cluster and allocate by cluster rather than by block (at the cost of more internal fragmentation)

3) Indexed allocation

Indexed allocation: each file has a first-level index, and the index contains an entry for every block (or extent) allocated to the file. The file's index is stored in a separate block, and the file's entry in the FAT points to that block.

Advantages:

Keeps the advantages of the chained structure while eliminating its drawbacks: allocating by block removes external fragmentation, while allocating by variable-size extents improves locality. Indexed allocation supports both sequential and direct access to the file and is the most common method

Satisfies dynamic file growth, insertion, and deletion (as long as free blocks exist)

Makes full use of external storage space

Disadvantages:

Many seeks and much seek time

The index table itself brings overhead in memory and disk space and in access time

