1. The operating system is system software that manages all computer system resources in a unified way and controls the execution of programs.
2. Major characteristics of an OS: (1) concurrency, sharing, virtuality, and asynchrony. (2) The most basic characteristics are concurrency and sharing.
3. The goals of modern OS design are convenience, efficiency, extensibility, and openness.
4. What is a batch operating system? Users prepare programs, data, and job-control instructions, and an operator enters them into the computer system for processing. The operating system selects jobs and controls their execution according to the job-control specifications. An operating system that processes jobs in this batched way is called a batch operating system.
5. What are the restrictions on the use of privileged instructions?
Only the operating system is allowed to execute privileged instructions; user programs cannot use them.
6. Why can multiprogrammed batch systems greatly improve the efficiency of computer systems?
① Multiple jobs run in parallel, reducing processor idle time.
② Job scheduling can choose a reasonable mix of jobs to load into main memory, making full use of system resources.
③ Jobs no longer access low-speed devices directly during execution; they access fast disk devices instead, reducing execution time.
④ Entering jobs in batches reduces the turnaround time between jobs.
7. What interfaces does the operating system provide for users?
The operating system provides two types of interfaces:
One is the operator level, which gives users the means to control the execution of jobs;
The second is the programmer level, which provides service functions (system calls) for user programs.
8. What are the basic ideas and advantages of a microkernel operating system?
A: Basic ideas: a sufficiently small kernel, based on the client/server model, applying the principle of separating mechanism from policy, and adopting object-oriented technology.
Advantages: it improves the extensibility of the system, enhances its reliability and portability, provides support for distributed systems, and incorporates object-oriented technology.
9. The user interfaces of the operating system are: the command interface, the program interface, and the graphical interface.
10. Classification of operating systems: serial processing, simple batch processing, multiprogrammed batch processing, and time-sharing systems.
11. What is the benefit of multiprogramming? The operating system and several user programs (two or more) reside in memory at the same time; when one job needs to wait for I/O, the processor can switch to another job that may not need to wait for I/O. This technique is called multiprogramming.
It improves processor utilization and average resource utilization, and it improves throughput and response time.
12. The process control block (PCB) is a data structure created and managed by the operating system; it stores process-related information such as the identifier, state, priority, program counter, and context data, and it is the operating system's key tool for supporting multiple processes.
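As an illustration only, here is a minimal sketch in C of the kind of information a PCB might hold; the field names are hypothetical, and real kernels store far more:

    /* Hypothetical PCB sketch; field names are illustrative only. */
    #include <stdint.h>

    typedef enum { PROC_NEW, PROC_READY, PROC_RUNNING,
                   PROC_BLOCKED, PROC_SUSPENDED, PROC_EXIT } proc_state_t;

    typedef struct pcb {
        int          pid;         /* process identifier            */
        proc_state_t state;       /* current scheduling state      */
        int          priority;    /* scheduling priority           */
        uint64_t     pc;          /* saved program counter         */
        uint64_t     regs[16];    /* saved register context        */
        void        *page_table;  /* memory-management information */
        struct pcb  *next;        /* link for ready/blocked queues */
    } pcb_t;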
13. Differences between a process and a program: ① a program is static, while a process is dynamic;
② a process can describe concurrency realistically, while a program cannot;
③ a process can create other processes, while a program cannot;
④ a process is a single execution with a life cycle, while a program can be preserved long-term as a software resource and exists for a relatively long time;
⑤ a process is the independent unit of system resource allocation and scheduling and can execute concurrently with other processes.
14. A primitive is a procedure composed of several instructions that accomplishes a specific function. It differs from an ordinary procedure in that it is an atomic operation: an indivisible basic unit that must not be interrupted during execution.
15. Process mutual exclusion: because processes require shared resources, and some of those resources must be used exclusively, processes compete for them; this relationship among processes is mutual exclusion.
Process synchronization: during concurrent execution, multiple processes cooperating on the same task must coordinate with each other in execution speed or at certain timing points; this restrictive relationship is called process synchronization.
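A minimal sketch of mutual exclusion, written here with threads and a POSIX mutex for simplicity (the shared counter and thread count are arbitrary illustrative choices): two threads increment a shared counter, and the lock makes the final value deterministic.

    /* Mutual-exclusion sketch: two threads update one shared counter. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* enter critical section  */
            counter++;                    /* access shared resource  */
            pthread_mutex_unlock(&lock);  /* leave critical section  */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* 200000 with the lock held */
        return 0;
    }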
16. What events cause process creation, and what are the steps of creation? Events: (1) user login, (2) job scheduling, (3) provision of a service, (4) application request (a process creating another process). Steps: (1) apply for a blank PCB, (2) allocate resources for the new process, (3) initialize the process control block, (4) insert the new process into the ready queue.
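On a POSIX system the "application request" case can be sketched with fork(); the kernel carries out the PCB allocation, initialization, and ready-queue insertion listed above. A minimal sketch, assuming a POSIX environment:

    /* Process-creation sketch using POSIX fork(). */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();          /* ask the kernel to create a child   */
        if (pid < 0) {
            perror("fork");          /* no PCB or resources available      */
            return 1;
        } else if (pid == 0) {
            printf("child  pid=%d\n", getpid());  /* new process runs here */
        } else {
            waitpid(pid, NULL, 0);   /* parent waits for child to finish   */
            printf("parent pid=%d\n", getpid());
        }
        return 0;
    }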
17. What events cause process termination? (1) Normal completion; (2) abnormal termination (out-of-bounds error, protection error, illegal instruction, privileged-instruction error, run timeout, wait timeout, arithmetic error, and I/O failure);
(3) external intervention (operator or operating-system intervention, parent-process request, and termination of the parent process).
18. What is a thread? What are the advantages of multithreading technology?
A thread is a subtask within a process that can execute independently; a process can have one or more threads, and each thread has a unique identifier. Threads and processes have many similarities, and threads are often called "lightweight processes." The fundamental difference is that the process is the unit of resource allocation, while the thread is the unit of dispatching and execution.
Multithreading technology has several advantages:
① Fast creation and low system overhead: creating a thread does not require allocating additional resources;
② simple communication and fast information transfer: threads of the same process communicate through the shared address space, so no additional communication mechanism is needed;
③ high parallelism: threads can execute independently and can take full advantage of the processor's ability to work in parallel with peripheral devices.
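A small sketch with POSIX threads illustrating advantage ②: because the threads share their process's address space, results are passed back through an ordinary shared array with no extra communication mechanism (the array and ranges are arbitrary illustrative values).

    /* Threads of one process communicate through shared memory. */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    static long partial[NTHREADS];      /* shared: one slot per thread */

    static void *sum_range(void *arg)
    {
        long idx = (long)arg, s = 0;
        for (long i = idx * 1000; i < (idx + 1) * 1000; i++)
            s += i;
        partial[idx] = s;               /* no message passing needed   */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, sum_range, (void *)i);

        long total = 0;
        for (long i = 0; i < NTHREADS; i++) {
            pthread_join(tid[i], NULL);
            total += partial[i];
        }
        printf("total = %ld\n", total); /* 0+1+...+3999 = 7998000 */
        return 0;
    }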
19. Describe the characteristics of a process?
(1) Structural (a program section, a data section, and a PCB), (2) dynamic, (3) concurrent, (4) independent, (5) asynchronous. A process is the execution process of a process entity and is an independent unit of the system's resource allocation and scheduling.
20. What are the reasons for introducing the suspended state?
(1) Request of the end user, (2) request of the parent process, (3) the need for load regulation, (4) the needs of the operating system itself.
21. Compare process scheduling and job scheduling.
① Job scheduling is macro-level scheduling: it determines which jobs may enter main memory. Process scheduling is micro-level scheduling: it determines which process within the loaded jobs occupies the central processing unit.
② Job scheduling selects jobs in the backlog (submitted) state and loads them into main memory. Process scheduling selects a process in the ready state to run on the processor.
22. Advantages and disadvantages of ULTs and KLTs. ULT: thread creation and termination are done by a thread library inside the process; the kernel is unaware of threads, schedules at the granularity of the process, and maintains state only for the process.
Advantages: thread switching does not require kernel-mode privileges, and each application can have its own scheduling algorithm without changing the operating system's underlying scheduler. Disadvantages: when one thread of an application blocks, all the threads of that application are blocked; and although the kernel assigns a processor to the process, only one of its threads can actually run at a time.
KLT: all thread-management work is done by the kernel; the application has no thread-management code such as a thread library, and scheduling is done on a per-thread basis. Advantages: multiple threads of one process can be dispatched to multiple processors, and if one thread blocks, the processor can be dispatched to another thread of the same process. Disadvantages: thread switching requires a mode switch into the kernel.
23. Concurrency: within one period of time, several programs are all in progress on the same processor, but at any single instant only one of them is actually running on the processor.
24. Race condition: when multiple threads or processes read and write shared data, the final result depends on the relative timing of their execution.
25. Semaphore: an integer value used to pass signals between processes.
26. Mutual exclusion: while one process is accessing shared data in a critical section, no other process may access any shared resource of that critical section.
27. Critical section: a piece of code in which a process accesses shared resources.
28. Synchronization: two or more activities that vary over time maintain a definite relative relationship as they change; for processes, cooperating processes coordinate their progress at certain points.
29. Deadlock: two or more processes each wait for a resource held by another process in the group, so none of them can continue to execute.
30. Livelock: two or more processes keep changing their state in response to changes in the other processes (they do not give up the CPU; they try, fail, and try again) but never do useful work.
The parties in a livelock keep changing state, while the parties in a deadlock simply wait.
For example, two threads that back off and retry under the same conditions after a collision will collide again on the next attempt, and this continues indefinitely.
31. Binary semaphore: a semaphore that can take only the values 0 and 1.
Counting semaphore: a semaphore that can take many integer values, including negative ones; that is, a non-binary semaphore.
Strong semaphore: the process that has been blocked longest is released from the queue first.
Weak semaphore: a semaphore that does not specify the order in which processes are removed from the queue.
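A sketch of counting semaphores in use: a bounded buffer with one producer and one consumer, written with POSIX unnamed semaphores. The buffer size and item count are arbitrary; "empty" counts free slots, "full" counts filled slots, and "mutex" acts as a binary semaphore.

    /* Producer/consumer sketch with POSIX counting semaphores. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 4                         /* buffer capacity   */
    #define ITEMS 10                    /* items to transfer */

    static int buf[N], in = 0, out = 0;
    static sem_t empty, full, mutex;

    static void *producer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < ITEMS; i++) {
            sem_wait(&empty);           /* wait for a free slot */
            sem_wait(&mutex);
            buf[in] = i; in = (in + 1) % N;
            sem_post(&mutex);
            sem_post(&full);            /* one more filled slot */
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < ITEMS; i++) {
            sem_wait(&full);            /* wait for a filled slot */
            sem_wait(&mutex);
            int v = buf[out]; out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty);           /* one more free slot */
            printf("consumed %d\n", v);
        }
        return NULL;
    }

    int main(void)
    {
        sem_init(&empty, 0, N);         /* N free slots initially     */
        sem_init(&full,  0, 0);         /* no filled slots initially  */
        sem_init(&mutex, 0, 1);         /* binary semaphore as a lock */
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }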
32. Starvation: a runnable process, although capable of continuing, is ignored indefinitely by the scheduler and is never scheduled to execute.
33. Describe the approaches to solving the deadlock problem.
① Deadlock prevention. The system allocates resources to processes according to a predetermined policy that makes one of the four necessary conditions for deadlock impossible to hold, so the system can never deadlock.
② Deadlock avoidance. The system dynamically checks each resource allocation and grants resources to a process only if the resulting system state is safe.
③ Deadlock detection. No restrictions are placed on resource requests and allocation; as long as resources remain, they are given to requesters. The operating system periodically checks whether a deadlock has occurred and, when one is found, tries to break it.
④ Deadlock recovery (releasing the deadlock).
34. Ways to prevent deadlock:
A. break the "hold and wait" condition; B. break the "no preemption" condition; C. break the "circular wait" condition.
35. Three necessary conditions for deadlock: mutual exclusion — only one process at a time can use a resource;
hold and wait — a process continues to hold the resources already allocated to it while waiting for other resources;
no preemption — resources already held by a process cannot be forcibly taken away.
36. The four conditions that together are necessary and sufficient for deadlock:
Mutual exclusion: a resource can be used by only one process at a time.
Hold and wait: a process blocked while requesting resources keeps the resources it has already obtained.
No preemption: resources a process has acquired cannot be forcibly taken away before it has finished using them.
Circular wait: a circular chain of processes exists in which each process waits for a resource held by the next.
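A sketch of how the circular-wait condition arises with two locks, and how imposing a single global lock order removes it (the lock names and functions are illustrative only, not a complete program design):

    /* Circular-wait sketch: t1 takes A then B; t2_bad takes B then A,
     * so each may end up holding one lock and waiting for the other.
     * t2_good uses the same global order as t1, so no cycle can form. */
    #include <pthread.h>

    static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

    static void *t1(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&A);
        pthread_mutex_lock(&B);
        /* ... use both resources ... */
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
        return NULL;
    }

    static void *t2_bad(void *arg)      /* opposite order: may deadlock */
    {
        (void)arg;
        pthread_mutex_lock(&B);
        pthread_mutex_lock(&A);
        pthread_mutex_unlock(&A);
        pthread_mutex_unlock(&B);
        return NULL;
    }

    static void *t2_good(void *arg)     /* same order: circular wait broken */
    {
        (void)arg;
        pthread_mutex_lock(&A);
        pthread_mutex_lock(&B);
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
        return NULL;
    }

    int main(void)
    {
        (void)t2_bad;                   /* swap in t2_bad to risk deadlock */
        pthread_t x, y;
        pthread_create(&x, NULL, t1, NULL);
        pthread_create(&y, NULL, t2_good, NULL);
        pthread_join(x, NULL);
        pthread_join(y, NULL);
        return 0;
    }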
37. Reusable resources are resources that only one process can safely use at a time and that are not depleted by use; consumable resources are resources that can be created (produced) and destroyed (consumed).
38. Memory-management schemes. Fixed partitioning: internal fragmentation. Dynamic partitioning: external fragmentation. Simple paging: each process is divided into pages equal in size to the frames; all of a process's pages must be loaded, though not necessarily contiguously; there is no external fragmentation, only a small amount of internal fragmentation in the last page. Simple segmentation: segments are of varying length; external fragmentation.
Virtual-memory paging: the same as simple paging except that not all pages need to be loaded.
Virtual-memory segmentation: the same as simple segmentation except that not all segments need to be loaded.
Relocation: the process of translating a program's logical addresses into physical addresses.
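A minimal sketch of relocation under simple paging, assuming 1 KiB pages and a made-up page table: the logical address is split into a page number and an offset, the page number indexes the page table, and the resulting frame number is recombined with the offset.

    /* Logical-to-physical translation sketch for simple paging. */
    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE 1024u                      /* assumed page size      */

    int main(void)
    {
        uint32_t page_table[] = { 5, 9, 7, 3 };  /* made-up page -> frame  */
        uint32_t logical = 2 * PAGE_SIZE + 100;  /* page 2, offset 100     */

        uint32_t page   = logical / PAGE_SIZE;   /* high bits: page number */
        uint32_t offset = logical % PAGE_SIZE;   /* low bits: offset       */
        uint32_t frame  = page_table[page];      /* page-table lookup      */
        uint32_t physical = frame * PAGE_SIZE + offset;

        printf("logical %u -> physical %u\n", logical, physical);
        return 0;                                /* prints 2148 -> 7268    */
    }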
39. Differences between internal and external fragmentation.
Internal fragmentation: because regions are split into fixed lengths, when a program is loaded into a fixed-size region larger than the program, space is left over inside the region; this is internal fragmentation.
External fragmentation: because programs are continually loaded and replaced, memory becomes divided into many non-contiguous blocks; the leftover holes whose addresses are not contiguous enough to load a process are external fragmentation.
External fragments are unallocated spaces that are too small to satisfy new allocation requests and belong to no process; internal fragments lie inside an allocated region: the process occupying the region does not use the space, yet the system cannot use it either until the process releases the region.
40. The difference between a physical address and a logical address: logical addresses are the addresses used within the user program, while a physical address identifies where information is actually stored in memory; memory is addressed by byte, and each byte unit has a unique memory address so that information can be stored and retrieved correctly.
41. What is compaction? Compaction is used to overcome external fragmentation: the operating system moves processes from time to time so that each process occupies contiguous space and all the free space is merged into one block.
42. Page-replacement policies. OPT: optimal — replace the page whose next reference is farthest in the future. LRU: least recently used — replace the page whose most recent use is farthest in the past. FIFO: replace the page that has been in memory longest. Clock: keep one use bit per frame and sweep a pointer around the circular buffer of frames, replacing the first frame whose use bit is 0 and clearing use bits along the way.
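A sketch of the clock policy over a small set of frames; the reference string and frame count are arbitrary example values.

    /* Clock (second-chance) page-replacement sketch.  On a fault the
     * pointer sweeps forward, clearing use bits, until it finds a frame
     * whose use bit is 0; that frame is replaced. */
    #include <stdio.h>

    #define NFRAMES 3

    int main(void)
    {
        int frame[NFRAMES], use[NFRAMES];
        int hand = 0, faults = 0;
        for (int i = 0; i < NFRAMES; i++) { frame[i] = -1; use[i] = 0; }

        int refs[] = { 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2 };
        int n = (int)(sizeof refs / sizeof refs[0]);

        for (int i = 0; i < n; i++) {
            int page = refs[i], hit = 0;
            for (int j = 0; j < NFRAMES; j++)      /* hit: just set use bit */
                if (frame[j] == page) { use[j] = 1; hit = 1; break; }
            if (hit) continue;

            faults++;
            while (use[hand]) {                    /* grant second chances  */
                use[hand] = 0;
                hand = (hand + 1) % NFRAMES;
            }
            frame[hand] = page;                    /* replace the victim    */
            use[hand] = 1;
            hand = (hand + 1) % NFRAMES;
        }
        printf("page faults: %d\n", faults);
        return 0;
    }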
43. TLB: the translation lookaside buffer, used to speed up translation from virtual addresses to physical addresses.
When a referenced page is not in memory, a page fault traps into the operating system, which is responsible for loading the required page.
44. Long-term, medium-term, and short-term scheduling. Long-term scheduling decides which programs are admitted into the system for processing: it moves a job from external storage into memory, turning the job or program into a process. If the newly created process starts in the ready state, it waits for short-term scheduling; if it starts in a blocked or suspended state, it waits for medium-term scheduling.
Medium-term scheduling is part of the swapping function. When all resident processes are blocked, the OS can put one of them into the suspended state and swap it out to disk; the reverse operation later swaps a previously suspended process back into memory.
Short-term scheduling (the dispatcher) decides exactly which process executes next.
45. Differences between high-level and low-level scheduling. High-level scheduling, also called job scheduling, takes jobs as its objects; it occurs when a (batch of) job(s) finishes and exits the system and a new (batch of) job(s) must be brought into memory, so its cycle is long. Low-level scheduling, also called process scheduling, takes processes (or kernel-level threads) as its objects; it is the most frequent and most basic kind of scheduling, and multiprogrammed batch, time-sharing, and real-time operating systems must all provide it.
46. Non-preemptive scheduling means that once a program is in the running state it keeps executing until it terminates or blocks on some request such as waiting for I/O; preemptive scheduling means the currently running program may be interrupted by the operating system and moved back to the ready state.
47. Short-term scheduling policies. FCFS: non-preemptive, first come first served. Round robin: preemptive, time-sliced, based on FCFS order. SPN: non-preemptive, shortest process next — select the process with the shortest expected processing time. SRT: preemptive, shortest remaining time — always select the process with the shortest remaining time. HRRN: non-preemptive, highest response ratio next. Feedback: preemptive, multi-level queues — each time a process runs it drops to a lower-priority queue, and a process that has just entered memory starts at the highest priority.
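These policies are usually compared by turnaround and waiting times. A small sketch computing the average waiting time under FCFS for made-up arrival and service times:

    /* FCFS waiting-time sketch; all times are made-up example values. */
    #include <stdio.h>

    int main(void)
    {
        int arrival[] = { 0, 2, 4, 6 };     /* arrival times           */
        int service[] = { 7, 4, 1, 4 };     /* required CPU bursts     */
        int n = 4, clock = 0;
        double total_wait = 0;

        for (int i = 0; i < n; i++) {       /* serve in arrival order  */
            if (clock < arrival[i])
                clock = arrival[i];         /* CPU idle until arrival  */
            total_wait += clock - arrival[i];
            clock += service[i];            /* run job to completion   */
        }
        printf("average wait = %.2f\n", total_wait / n);  /* 4.50 here */
        return 0;
    }

Under SPN or SRT the short 1-unit job would run earlier and the average wait would drop; that trade-off is what distinguishes these policies.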
48. I/O buffering. Single buffer: when a process issues an I/O request, the operating system assigns the operation a buffer in the system portion of main memory; input data is placed in the buffer, and when the transfer completes the process moves the data into user space and immediately requests the next block into the buffer.
Double buffering and circular buffering extend this idea to two or more buffers.
Buffering improves operating-system efficiency and the performance of individual processes.
49. Disk-scheduling policies. RSS: random scheduling. FIFO: fair, requests served in arrival order. SSTF: shortest seek (service) time first; high utilization, small queues. SCAN: the arm moves in one direction servicing requests until the last track, then reverses direction. C-SCAN: the arm services requests in one direction only; on reaching the last track it returns to the opposite end and scans in the same direction again.
50. Seek time: the time required for the head arm to position over the track. Rotational delay: the time for the head to reach the specified sector. Access time: the time required to reach the read/write position (seek time plus rotational delay). Transfer time: the time required to transfer the data. Average access time for reading b bytes: Ta = Ts + 1/(2r) + b/(rN), where Ts is the average seek time, r is the rotation speed in revolutions per second, b is the number of bytes to transfer, and N is the number of bytes per track.
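A worked example of the formula with assumed numbers (4 ms average seek, 7500 rpm, reading one 512-byte sector from a track holding 500 sectors):

    /* Average-access-time sketch: Ta = Ts + 1/(2r) + b/(rN). */
    #include <stdio.h>

    int main(void)
    {
        double Ts = 0.004;            /* assumed average seek time (s)  */
        double r  = 7500.0 / 60.0;    /* rotation speed: 125 rev/s      */
        double b  = 512.0;            /* bytes to transfer (one sector) */
        double N  = 500.0 * 512.0;    /* bytes per track                */

        double Ta = Ts + 1.0 / (2.0 * r) + b / (r * N);
        printf("average access time = %.3f ms\n", Ta * 1000.0);
        return 0;                     /* about 4 + 4 + 0.016 = 8.016 ms */
    }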
51. A file is a named collection of related information defined by its creator. By logical structure, files are divided into structured (record-oriented) files and unstructured (stream) files.
52. File organization (five basic ways; three are described here). Pile: records are strings of data collected in order of arrival, with no structure; the file is searched exhaustively. Sequential file: all records have the same length and fixed format and are organized in order of a key field. Direct or hashed file: each record requires a key field, and the record's location is determined by hashing the key.
53. FAT: the file allocation table, a table that records which disk blocks are allocated to each file.
54. What type of allocation is used for exclusive (dedicated) devices? Exclusive devices are usually allocated statically.
That is, before a job executes, the devices of the types the job will use are assigned to it; the job occupies them throughout its execution and returns them only when execution ends.
55. Main differences between segmentation and paging.
A. Both paging and segmentation allocate memory in a discrete (non-contiguous) way and use an address-mapping mechanism to translate addresses; this is what they have in common.
B. They differ in three respects. First, in function: a page is a physical unit of information, and paging exists to implement discrete allocation, reduce memory fragmentation, and improve memory utilization; it meets the needs of system management rather than of the user. A segment is a logical unit of information containing a relatively complete set of information, and it meets the needs of the user.
C. The size of a page is fixed and determined by the system, while the length of a segment is not fixed and depends on the program the user writes.
D. A paged job address space is one-dimensional, while a segmented job address space is two-dimensional.
56. The differences between DMA and interrupt-driven control: both transfer data in blocks. The differences are:
1) When the CPU handles interrupts: in interrupt-driven mode, the CPU is interrupted each time the data buffer register becomes full; in DMA mode, the CPU is interrupted only once, after the entire data block has been transferred. This greatly reduces the number of interrupts the CPU must handle.
2) Who completes the data transfer: in interrupt-driven mode, the transfer is controlled by the CPU in its interrupt handler; in DMA mode, it is completed by the DMA controller.
57. The I/O interrupt-handling process: (1) wake up the blocked driver process, (2) save the CPU context of the interrupted process, (3) transfer control to the corresponding device-handling routine, (4) perform the interrupt processing, (5) restore the context of the interrupted process.
58. What are the characteristics of a device driver? (1) The driver is mainly a program for communication and translation between the process requesting I/O and the device controller; (2) the driver is closely tied to the hardware characteristics of the device controller and the I/O device; (3) the driver is closely tied to the I/O control mode used by the device; (4) because the driver is so closely tied to the hardware, part of it must be written in assembly language; (5) the driver should be reentrant.
59. What is the processing flow of a device driver?
(1) Translate abstract requirements into concrete requirements, (2) check the legality of the I/O request, (3) read and check the device status, (4) pass the necessary parameters, (5) set the operating mode, and (6) start the I/O device.
Operating system 2015 (Sichuan University Software Academy)