Summary of basic concepts of operating system learning

Operating Systems is one of the major required courses for computer science majors, and it is also a required subject on many postgraduate entrance examinations. This post collects the basic operating-system knowledge points I organized during my college years; they should be enough to cover a typical undergraduate final exam.

1. Storage management research covers four topics:

(1) Storage allocation: the focus is on storage sharing and the various allocation algorithms

(2) Address relocation: studies the address translation mechanism and the static and dynamic relocation methods

(3) Storage protection: studies methods for protecting the various program and data areas

(4) Storage expansion: studies virtual storage and its scheduling algorithms

2. The space in a program made up of symbolic names is called the name space.

3. A relative address is also called a logical address or virtual address; the space of a program made up of relative addresses is called the logical address space. The logical address space is converted into the absolute address space by the address relocation mechanism. The absolute address space is also called the physical address space.

4. When a program in the logical address space is loaded into the physical address space, the two spaces do not coincide, so address transformation is required; this is also called address mapping or address relocation.

5. There are two ways to perform address relocation:

(1) Static relocation

(2) Dynamic relocation

6. Static relocation relocates addresses before the program executes. It is usually carried out by software (the assembler/loader) and requires no hardware support.

7. Disadvantages of static relocation:

(1) Once relocated, the program can no longer be moved, and memory cannot be reallocated to it.

(2) Storage space can only be allocated contiguously; the program cannot be scattered across different areas of memory.

(3) It is difficult for users to share the same copy of a program.

8. Dynamic address relocation is performed during program execution, before each storage access. It requires hardware support.

9. Advantages of dynamic relocation (see the sketch after this list):

(1) The program can be moved in memory.

(2) The program need not be stored contiguously; it can occupy different areas of memory.

(3) Programs can be shared.
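
Below is a minimal sketch of the idea behind dynamic relocation, assuming a hypothetical relocation (base) register and limit register; real MMUs are more elaborate. Because the base is added at every access, the operating system can move the program and only the register contents change.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical relocation hardware: a base (relocation) register and a
 * limit register. Real MMUs differ; this only illustrates the idea. */
typedef struct {
    unsigned long base;   /* start of the program's partition in physical memory */
    unsigned long limit;  /* size of the partition */
} reloc_regs;

/* Translate a logical address at access time (dynamic relocation). */
unsigned long translate(const reloc_regs *r, unsigned long logical) {
    if (logical >= r->limit) {            /* storage protection check */
        fprintf(stderr, "address %lu out of bounds\n", logical);
        exit(EXIT_FAILURE);
    }
    return r->base + logical;             /* physical = base + logical */
}

int main(void) {
    reloc_regs r = { .base = 0x20000, .limit = 0x4000 };
    printf("logical 0x100 -> physical 0x%lx\n", translate(&r, 0x100));
    /* If the OS moves the program, only r.base changes; the program's
     * logical addresses stay the same. */
    r.base = 0x80000;
    printf("after moving: logical 0x100 -> physical 0x%lx\n", translate(&r, 0x100));
    return 0;
}
```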

10. Automatic overlay: for a large program, only part of the address space is kept in main memory during execution, with the rest in secondary storage; when information being accessed is not in main memory, the operating system brings it in from secondary storage.

11. Virtual storage is actually an address space.

12. The maximum capacity of a virtual memory is determined by the address structure of the computer.

13. The capacity of virtual storage can be larger or smaller than that of real memory. A system can establish a virtual memory for each user, and each user can program within their own address space (whose maximum size is the virtual memory capacity).

14. Early storage management schemes:

(1) Single contiguous allocation

(2) Partitioned allocation

15. Page: the logical address space is divided into equal-sized pieces called pages.

16. Block: the physical address space is divided into pieces of the same size, called blocks.

17. The pages of a job's logical address space are contiguous, but the blocks of physical storage they are mapped to may be non-contiguous.

18. Address translation mechanisms:

(1) Dynamic address translation (DAT) mechanism

(2) High-speed page-table registers

(3) Associative registers (the fast table)

19. The page table resides in main memory and is managed by the operating system. Every instruction executed requires an address translation.

20. The associative memory holds the page numbers most frequently used by the running job together with their corresponding block numbers, and it can be searched in parallel.
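
Below is a rough sketch of the translation described in items 18-20, assuming an illustrative page size, a small array standing in for the page table in main memory, and a tiny associative "fast table" that is searched before the page table; all names and sizes are made up.

```c
#include <stdio.h>

#define PAGE_SIZE   4096        /* assumed page size */
#define TLB_ENTRIES 4           /* tiny "fast table" for illustration */

typedef struct { int valid; unsigned page; unsigned block; } tlb_entry;

static tlb_entry tlb[TLB_ENTRIES];   /* associative registers (fast table) */
static unsigned page_table[256];     /* page -> block, kept "in main memory" */

/* Translate a logical address: search the fast table first, then fall
 * back to the page table in main memory, as notes 18-20 describe. */
unsigned translate(unsigned logical) {
    unsigned page   = logical / PAGE_SIZE;
    unsigned offset = logical % PAGE_SIZE;
    unsigned block;

    for (int i = 0; i < TLB_ENTRIES; i++)      /* done in parallel by hardware */
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].block * PAGE_SIZE + offset;

    block = page_table[page];                  /* extra access to main memory */
    tlb[0] = (tlb_entry){1, page, block};      /* naive refill of the fast table */
    return block * PAGE_SIZE + offset;
}

int main(void) {
    page_table[3] = 7;                         /* page 3 is stored in block 7 */
    unsigned logical = 3 * PAGE_SIZE + 42;
    printf("logical %u -> physical %u\n", logical, translate(logical));
    printf("second access (served by the fast table): %u\n", translate(logical));
    return 0;
}
```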

21. Tables (data structures) that paging management must establish:

(1) Job table (JT)

(2) Memory block table (MBT)

(3) Page map table (PMT)

22. Job table: one table for the entire system. Each entry corresponds to one job and records that job's page-table start address, page-table length, and status information.

23. Memory block table: one table for the entire system. Each entry corresponds to one storage block and records whether the block has been allocated.

24. Page map table: one table per job. Each entry corresponds to one page.

25. Paging storage management does not solve the storage expansion problem: when a job cannot be loaded into memory in its entirety, it cannot run.

26. Virtual page: a page obtained by dividing up the job's address space.

27. Real page: main memory is called real memory, and a block of real memory is a real page.

28. Demand (request) paging storage management: paging storage management that loads pages as they are requested.

29. The hardware generates a page-fault interrupt, and control transfers to the interrupt handler.

30. When the processor executes an instruction, it first forms the effective address of the operand, then computes the page number and checks the page table to see whether that page is in real memory. If it is, the address is translated, the operand is fetched from the translated address, the instruction completes its function, and execution continues with the next instruction. If it is not, a page fault occurs and control enters the interrupt handler.

31. Page-out: a page is moved out to secondary storage.

32. Page-in: a page is brought from secondary storage into real memory.

33. Thrashing (system churning): pages are repeatedly paged out and paged back in, wasting a great deal of processor time.

34. Know the various page replacement algorithms! Practice the exercises! (A FIFO sketch follows item 36.)

35. Programs should have a high degree of locality, so that accesses concentrate on a few pages and the number of page faults is reduced.

36. For a program to run effectively, the number of its pages in main memory should be no less than half of its total number of pages.
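
As an illustration of the page replacement algorithms mentioned in item 34, here is a sketch of FIFO replacement that counts page faults for a made-up reference string and frame count; it is only one of the algorithms an exam might ask about.

```c
#include <stdio.h>

/* FIFO page replacement: on a fault, evict the page that has been in
 * memory the longest. The frames and reference string are illustrative. */
int main(void) {
    int frames[3] = {-1, -1, -1};            /* 3 page frames, initially empty */
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int nrefs = sizeof refs / sizeof refs[0];
    int next_victim = 0, faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int f = 0; f < 3; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (!hit) {                          /* page fault */
            frames[next_victim] = refs[i];   /* load the page, evicting the oldest */
            next_victim = (next_victim + 1) % 3;
            faults++;
        }
    }
    printf("references: %d, page faults: %d\n", nrefs, faults);
    return 0;
}
```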

37. Two memory accesses per reference: paging storage management, demand paging storage management, segmented storage management.

38. Three memory accesses per reference: segment-page storage management.

39. Understand segmented storage management and segment-page storage management.

40. Windows NT uses demand paging storage management with a FIFO page replacement algorithm.

41. The page size of Windows NT is 4 KB.

42. The Windows NT virtual memory manager is one of the important components of the NT executive and is the basic storage management system of Windows NT.

43. Windows NT runs on 32-bit microcomputers (80386 and above), so each process has a 4 GB (2^32 byte) virtual address space.

44. The 4 GB virtual address space is divided into two parts. The high-address 2 GB is reserved for the system, and the low-address 2 GB is the user storage area, which can be accessed by both user-mode and kernel-mode threads.

45. The system area is divided into three parts:

(1) The top part is the fixed-page area, called the non-paged area, which holds pages that are never swapped out of memory.

(2) The second part is the paged area, which holds system code and data that are not permanently resident in memory.

(3) The last part is the direct-mapped area, mapped directly by the hardware; its pages stay permanently in memory and are never paged out.

46. The implementation of virtual storage management includes two aspects:

(1) Address translation mechanism

(2) Page scheduling policy

1. A file is an ordered sequence of related elements identified by a symbolic name.

2. An "element" of a file is the smallest addressable item of information (a word or a byte).

3. A file consists of a number of minimum units called logical records. A record is a meaningful collection of information that acts as the basic unit for accessing files.

4. The length of each record of a file can be equal or unequal.

5. Slow character devices are also treated as files, e.g. the keyboard input file or the printer output file.

6. The part of the operating system responsible for managing and accessing file information is called the file management system, or file system for short.

7. The file system consists of three parts:

(1) The software related to file management

(2) The files being managed

(3) The data structures needed to implement file management

8. Benefits to users of adding a file management component to the operating system:

(1) Ease of use: files are accessed by name.

(2) Data security: protection measures are provided to prevent files from being damaged unintentionally.

(3) A uniform interface: files on various media can be accessed with the same generalized instructions or system calls.

9. Files are classified by nature and use into:

(1) System files

(2) Library file

(3) User files

10. System files: not directly accessible to users; they serve users only through system calls.

11. Library files: users may call them but may not modify them.

12. User files: files that users entrust to the operating system for safekeeping.

User files are classified by usage into:

(1) Temporary files

(2) Archive file

(3) Permanent files

13. By form of protection, files are divided into:

(1) Read-only files

(2) Read-write files

(3) Unprotected files

14. By the direction of information flow, files are divided into:

(1) Input files: e.g. the keyboard input file; input only

(2) Output files: e.g. the printer file; output only

(3) Input/output files: files on disk or tape; both readable and writable

15. The important role of the file system is to establish the mapping between the user's logical files and the physical files on the corresponding devices, and to carry out the conversion between the two.

16. The file access method is determined by the nature of the file and by how the user intends to use it.

17. The two logical structures of files:

(1) Structured record files

(2) Unstructured stream files

18. Record files are divided into:

(1) Fixed-length record files

(2) Variable-length record files

19. Each block is called a physical block, and the information in the block is called the physical record.

20. Physical structures of a file:

(1) Contiguous structure

(2) Chained (linked) structure

(3) Indexed structure (index files)

(4) Hashed structure (hash files)

21. If the information of a logical file is stored in adjacent physical blocks on the file storage device, the file is called a contiguous file (also known as a sequential file); this is the contiguous structure.

22. The chained structure is also called the linked structure. Its drawback is that it suits only sequential access and does not lend itself to direct access.

23. An index file requires an index table to be created for each file; each entry indicates the physical block number in which a logical record of the file resides.

24. The index table is created automatically by the system when the file is established and is placed on the same file volume as the file.

25. The physical block that holds the index table is called the index table block.
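
A small sketch of the indexed structure from items 23-25: a per-file index table maps each logical block number to the physical block where it is stored, which is why direct access is cheap. The single-level table and the numbers here are illustrative only.

```c
#include <stdio.h>

/* Per-file index table: entry i holds the physical block number where
 * logical block i of the file is stored (illustrative, single level). */
typedef struct {
    int nblocks;
    int physical_block[64];      /* the index table, kept in an index block */
} index_table;

/* Direct access: find the physical block that holds logical block lbn. */
int lookup(const index_table *it, int lbn) {
    if (lbn < 0 || lbn >= it->nblocks)
        return -1;               /* beyond the end of the file */
    return it->physical_block[lbn];
}

int main(void) {
    /* A small file whose blocks are scattered over the disk. */
    index_table it = { .nblocks = 4, .physical_block = {120, 7, 93, 45} };
    printf("logical block 2 is stored in physical block %d\n", lookup(&it, 2));
    return 0;
}
```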

26. How multiple index blocks are organized:

(1) Chained (linked) index blocks

(2) Multi-level index

27. The physical structure of a UNIX file is a multi-level index structure, and its logical structure is a stream file.

28. The Hash method is also known as the hashing (scatter-storage) method.

29. In the hash method, different keys may produce the same hash address after the calculation; this phenomenon is called an "address collision".

30. The technique for resolving address collisions is called overflow handling, and it is the main consideration when designing hash files. Methods include sequential (linear) probing, double hashing, and so on.
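
A sketch of an address collision and the sequential (linear) probing overflow technique from items 28-30; the toy hash function and table size are invented for illustration.

```c
#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 8                 /* illustrative number of buckets */

static char table[TABLE_SIZE][16];   /* an empty slot holds "" */

/* Toy hash: sum of characters modulo the table size (illustration only). */
unsigned hash(const char *key) {
    unsigned h = 0;
    while (*key) h += (unsigned char)*key++;
    return h % TABLE_SIZE;
}

/* Insert with sequential (linear) probing: on an address collision,
 * try the next slot until a free one is found. */
int insert(const char *key) {
    unsigned start = hash(key);
    for (int i = 0; i < TABLE_SIZE; i++) {
        unsigned slot = (start + i) % TABLE_SIZE;
        if (table[slot][0] == '\0') {
            strncpy(table[slot], key, sizeof table[slot] - 1);
            return (int)slot;
        }
    }
    return -1;                       /* table full: overflow not handled here */
}

int main(void) {
    printf("\"ab\" -> slot %d\n", insert("ab"));
    printf("\"ba\" -> slot %d (collision resolved by probing)\n", insert("ba"));
    return 0;
}
```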

31. The file access method refers to how the physical blocks on the file storage device are read and written.

32. File access methods:

(1) Sequential access method

(2) Direct access method

(3) Keyed access method

33. In systems that provide a record-structured file organization, sequential access reads records strictly in the order in which the physical records are arranged: if the record currently accessed is i, the next record to be accessed is automatically i+1.

34. The direct access method is suited to files with an index table (indexed files).

35. The keyed access method accesses records according to the contents of each record in the file.

36. The physical structure of a file depends on:

(1) The characteristics of the file storage device

(2) The access method

37. If the direct access method is used, index files are the most efficient, contiguous files are intermediate, and chained files are the least efficient.

38. The multi-level directory structure is also called the tree-structured directory.

39. In a tree-structured directory, the root node is called the root directory, the intermediate nodes are called subdirectories, and the leaf nodes are the information files.

40. Both the root directory and the subdirectories are themselves files, called directory files.

41. Two ways to represent file names in a tree-shaped directory structure:

(1) Absolute path name

(2) Relative path name

42. An absolute pathname always starts from the root directory and is unique. If the first character of a pathname is the delimiter, the path is an absolute path.

43. Note the "." (dot) and ".." (dot-dot) notation.

44. In the directory structure used by UNIX, each directory entry contains a file name and an i-node number.

45. The basic idea of the bitmap is to build a map out of a number of bytes in which each bit corresponds to one physical block on the file storage device.

46. The bitmap reflects the allocation state of disk blocks and is also called the disk bitmap.

47. The bitmap is kept in main memory.
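
A sketch of the bitmap idea from items 45-47, where each bit stands for one physical block (1 = allocated); the sizes and function names are illustrative.

```c
#include <stdio.h>
#include <string.h>

#define NBLOCKS 64                        /* illustrative disk size in blocks */

static unsigned char bitmap[NBLOCKS / 8]; /* one bit per physical block */

/* Allocate a free block: find the first 0 bit, set it, return the block number. */
int alloc_block(void) {
    for (int b = 0; b < NBLOCKS; b++)
        if (!(bitmap[b / 8] & (1 << (b % 8)))) {
            bitmap[b / 8] |= 1 << (b % 8);
            return b;
        }
    return -1;                            /* no free block */
}

/* Free a block: clear its bit. */
void free_block(int b) {
    bitmap[b / 8] &= ~(1 << (b % 8));
}

int main(void) {
    memset(bitmap, 0, sizeof bitmap);
    int a = alloc_block(), b = alloc_block();
    printf("allocated blocks %d and %d\n", a, b);
    free_block(a);
    printf("after freeing %d, the next allocation gets %d\n", a, alloc_block());
    return 0;
}
```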

48. Ways to implement file sharing:

(1) Sharing under the same name

(2) Sharing under different names

49. Same-name sharing: every user accesses the file with the same file name, including its path.

50. Different-name sharing: each user accesses the file with a file name of their own.

51. The method used to implement such sharing is called linking the file.

52. Two ways to implement linking:

(1) Sharing based on the index node (hard link)

(2) Sharing based on a symbolic link (soft link)

53. Review your textbook, p. 155.

54. A pipe is a special file; specifically, it is a special kind of open file.

55. The components of a pipe:

(1) An index node on external storage

(2) The corresponding in-memory index node

(3) Two entries in the system open-file table

56. After a process creates a pipe file, it then creates one or more child processes. A child process inherits all of the parent's open files, so the pipe file created by the parent process is shared by the child processes.

57. The pipe file is a temporary file that uses the disk as an intermediary to implement inter-process communication, so it is slower than communicating through memory. It is suitable only for communication between parent and child processes.

58. Pipe files must solve two special problems: synchronization and mutual exclusion of pipe reads and writes.

59. When a process writes data into the pipe and the amount of data written exceeds the specified length, the writing process is suspended until the data is taken away by the reading process, which then wakes the writer. When a process reads data from the pipe and the data in the pipe has been used up, the reading process is suspended and is woken once the writing process puts new data into the pipe.

60. For read/write mutual exclusion, several processes must be prevented from reading and writing the pipe file at the same time, so the pipe must be locked before the operation is carried out.
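
The sketch below shows the parent-child pipe usage pattern described in items 54-60, using the standard POSIX pipe() and fork() calls; the blocking behaviour of read() and write() on a pipe is what provides the synchronization the notes describe. Modern POSIX pipes are kept in kernel buffers rather than in a disk-backed file, so this illustrates only the usage pattern, not the implementation in the notes.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                        /* fd[0]: read end, fd[1]: write end */
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();               /* the child inherits the open pipe */
    if (pid == 0) {                   /* child: the reader */
        close(fd[1]);
        /* read() blocks until the parent has written something. */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd[0]);
        return 0;
    }

    close(fd[0]);                     /* parent: the writer */
    const char *msg = "hello through the pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);                       /* wait for the child to finish */
    return 0;
}
```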

61. The access control matrix is kept in memory. The file's access rights are compared with the access the user requests; if they do not match, access is denied.

62. System calls that the file system provides to the user:

(1) Create / Delete

(2) Open / close

(3) Read / write Files

1. IO devices classified by usage characteristics:

(1) Storage devices

(2) Input/output devices

(3) Terminal devices

(4) Offline devices

2. IO devices classified by affiliation:

(1) System devices: printers, disks, clocks

(2) User devices

3. IO devices classified from the angle of resource allocation:

(1) Exclusive (dedicated) devices: most low-speed IO devices, e.g. printers

(2) Shared devices

(3) Virtual devices: an originally exclusive device transformed, by means of SPOOLing technology, into a device that can be shared by several processes

4. IO devices classified by the unit of data transferred:

(1) Character devices: transfer data byte by byte, e.g. printers

(2) Block devices: transfer data in units of blocks, e.g. disks

5. A disk with only one platter is called a diskette; a disk made up of several platters is called a hard disk.

6. A disk consists of two parts:

(1) The rotating body (the platters)

(2) The read/write heads (on the access arms)

7. Track: with the access arm at a fixed position, the circle traced on the platter surface by the corresponding head is a track.

8. Every track has the same number of sectors, and every sector holds the same number of bytes.

9. The physical address of a physical block consists of three parts:

(1) Cylinder number

(2) Track number

(3) Physical record (sector) number

10. The time needed to read or write a sector is the same for every sector.
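
A sketch of how the three-part physical address of item 9 (cylinder, track, sector) can be turned into a single linear block number, assuming a made-up disk geometry; real disks differ, but the arithmetic shows why the three numbers identify a block uniquely.

```c
#include <stdio.h>

/* Hypothetical disk geometry (illustrative only). */
#define TRACKS_PER_CYLINDER 8    /* one track per recording surface/head */
#define SECTORS_PER_TRACK  16

/* Linear block number computed from (cylinder, track, sector). */
long block_number(int cylinder, int track, int sector) {
    return (long)cylinder * TRACKS_PER_CYLINDER * SECTORS_PER_TRACK
         + (long)track * SECTORS_PER_TRACK
         + sector;
}

int main(void) {
    /* Cylinder 2, track 3, sector 5 on this made-up geometry. */
    printf("block number = %ld\n", block_number(2, 3, 5));
    return 0;
}
```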

11. IO control modes:

(1) Cyclic-test (polling) IO mode

(2) Program-interrupt IO mode

(3) DMA mode

(4) Channel mode

12. In the cyclic-test IO mode, CPU time is spent waiting for input/output and repeatedly testing device status, so efficiency is very low.

13. In the program-interrupt IO mode, the CPU is interrupted only when the IO operation completes normally or ends abnormally. This achieves a certain degree of parallelism.

14. DMA mode: block devices support DMA mode.

15. The device controller consists of three parts:

(1) The interface between the device controller and the CPU: data lines, address lines, control lines

(2) The interface between the device controller and the device: data signals, control signals, status signals

(3) IO logic

16. In a computer system with a channel structure, main memory, channels, controllers, and devices are connected in four levels with three levels of control.

17. The IO operation process with channels: while the CPU is executing a user program, when an IO request arises, an IO instruction is used to start the selected device on the specified channel. Once the device is started successfully, the channel takes over control of its operation. When the device's IO operation completes, the channel issues an IO-end interrupt; the CPU suspends its current work and turns to the interrupt service routine.

18. Channels are divided into three categories according to how they exchange information and what devices they connect:

(1) Byte multiplexor channel

(2) Selector channel

(3) Array (block) multiplexor channel

19. Byte multiplexor channel: designed to connect a large number of slow devices. It works byte by byte in an interleaved fashion: as soon as it has transferred one byte for one device, it transfers a byte for another device.

20. Selector channel: connects fast devices. It works in burst mode and serves only one device at a time.

21. Array multiplexor channel: it first executes one channel command for one device, then automatically switches and executes a channel command for another device.

22. Because channels are costly, there are far fewer channels than devices.

23. The instructions of the IO processor are called channel commands. A channel command is represented by a channel command word (CCW). A program written with channel commands is called a channel program, also called an IO program. The process of writing channel programs is called channel programming or IO programming.

24. Two fixed memory locations are accessed during channel input/output: the channel address word (CAW) and the channel status word (CSW).

25. The input/output instructions are instructions of the central processing unit; they are all privileged instructions and can be executed only in supervisor (privileged) mode.

26. When a user program needs to transfer data between main memory and an IO device, it presents the IO request to the operating system in the form of generalized instructions or system calls. The processor thereby switches from user mode into supervisor mode, and the system program running under its control can then use IO instructions.

27. The relationship between the CPU and the channel is master-slave: the CPU is the master device and the channel is the slave device.

28. The CPU and the channel communicate as follows:

(1) The CPU issues IO instructions to the IO channel, commanding the channel to work and checking its operation.

(2) The channel reports to the CPU by means of interrupts and waits for the CPU to handle them.

29. Objectives of IO software design:

(1) Device independence

(2) Error handling

(3) Synchronous and asynchronous transfer

(4) Handling IO operations for both exclusive and shared devices

30. Errors should be handled as close to the hardware as possible. Higher-level software is notified only when the lower-level software cannot handle the error.

31. The IO system should be organized into 4 levels:

(1) Interrupt handlers (lowest level)

(2) Device drivers

(3) Device-independent IO software

(4) User-space IO software (highest level)

32. The interrupt handler sits at the very bottom of the IO system. When a process requests an IO operation, the operating system suspends (blocks) the process until the IO operation ends and causes an interrupt. When the interrupt occurs, the interrupt handler performs the appropriate actions to unblock the corresponding process.

33. Each device driver processes only one device or a class of closely related devices.

34. The blocked driver must be awakened by an interrupt.

35. Device-independent IO software:

(1) Device naming

(2) Device protection

(3) device-independent block size

(4) Data buffering

(5) Allocation of data blocks

(6) Allocation and release of exclusive devices

(7) Error handling

36. Buffering techniques include input buffering and output buffering.

37. Input buffering: the operating system reads data from the device into a system buffer before the user process needs the data.

38. Output buffering: the operating system first writes the data to be output into a system buffer; while the process continues to run, the data is sent from the buffer to the device for output.

39. Buffers classified by mode of use:

(1) Dedicated buffers

(2) General-purpose buffers

40. Buffer pool: a general-purpose buffering technique.

41. A buffer pool has four kinds of working buffers:

(1) A working buffer for receiving input data

(2) A working buffer for extracting input data

(3) A working buffer for receiving output data

(4) A working buffer for extracting output data

42. The buffer pool is used in four ways:

(1) Receive input

(2) Extract input

(3) Receive output

(4) Extract output

43. The operating system usually manages block devices with buffer-pool technology. To improve the read/write efficiency of block devices, read-ahead and delayed-write techniques are widely used in operating systems.
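
A rough sketch of the read-ahead and delayed-write ideas from item 43, using a one-block cache and stub device routines; every name and policy here is invented for illustration.

```c
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 512

/* Stub device operations standing in for a real block-device driver. */
static void device_read(int blk, char *buf)        { printf("device read  block %d\n", blk); memset(buf, 0, BLOCK_SIZE); }
static void device_write(int blk, const char *buf) { (void)buf; printf("device write block %d\n", blk); }

/* One cached block plus a dirty flag (for delayed write). */
static char cache[BLOCK_SIZE];
static int  cached_blk = -1, dirty = 0;

static void flush(void) {
    if (dirty) { device_write(cached_blk, cache); dirty = 0; }
}

/* Read with read-ahead: serve the requested block, then prefetch blk+1. */
void read_block(int blk, char *out) {
    if (blk != cached_blk) { flush(); device_read(blk, cache); cached_blk = blk; }
    memcpy(out, cache, BLOCK_SIZE);
    flush();                              /* write back before reusing the cache */
    device_read(blk + 1, cache);          /* read-ahead: fetch the next block early */
    cached_blk = blk + 1;
}

/* Write with delayed write: only mark the cached copy dirty. */
void write_block(int blk, const char *in) {
    if (blk != cached_blk) { flush(); cached_blk = blk; }
    memcpy(cache, in, BLOCK_SIZE);
    dirty = 1;                            /* the real device write happens later */
}

int main(void) {
    char buf[BLOCK_SIZE] = "data";
    write_block(10, buf);                 /* no device write yet (delayed) */
    read_block(10, buf);                  /* served from the cache; block 11 prefetched */
    flush();
    return 0;
}
```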

44. Disk scheduling performs moving-arm (seek) scheduling first, then rotational scheduling.

45. Key point: the arm scheduling algorithms!
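
As one example of the arm scheduling algorithms highlighted in item 45, here is a sketch of SSTF (shortest seek time first); the request queue and the starting track are made up, and other algorithms (FCFS, SCAN, and so on) would order the requests differently.

```c
#include <stdio.h>
#include <stdlib.h>

/* SSTF arm scheduling: always serve the pending request whose track is
 * closest to the current head position. The values are illustrative. */
int main(void) {
    int requests[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = sizeof requests / sizeof requests[0];
    int served[8] = {0};
    int head = 53, total_seek = 0;

    printf("service order:");
    for (int k = 0; k < n; k++) {
        int best = -1, best_dist = 0;
        for (int i = 0; i < n; i++) {
            if (served[i]) continue;
            int dist = abs(requests[i] - head);
            if (best == -1 || dist < best_dist) { best = i; best_dist = dist; }
        }
        served[best] = 1;
        total_seek += best_dist;
        head = requests[best];
        printf(" %d", head);
    }
    printf("\ntotal head movement: %d tracks\n", total_seek);
    return 0;
}
```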

46. Four data structures used for device allocation:

(1) Device control block (UCB)

(2) Controller control block (CUCB)

(3) Channel control block (CCB)

(4) System device table (SDT)

47. Once a channel program has been started, it runs until it finishes and is not interrupted before completion. Therefore IO scheduling cannot use the round-robin (time-slice) method.

48. Device allocation steps in a single-channel IO system:

(1) Allocate the device: using the device name given by the process (mapping the logical device name to a physical device name), search the SDT to find the physical device's UCB, and determine from the UCB whether the device is busy or idle. If it is busy, the process requesting IO is inserted into the device's wait queue; if it is idle, the device can be allocated.

(2) Allocate the controller: after the system has allocated the device to the process requesting IO, it follows the controller-table pointer in the UCB to find the CUCB of the controller attached to that device and checks the status information in that table. If the controller is busy, the process is inserted into the controller's wait queue; if it is idle, the controller is allocated to the process.

(3) Allocate the channel: through the CUCB, find the channel table (CCB) of the channel connected to this controller and check its status information. If the channel is busy, the process is inserted into the channel's wait queue; if it is idle, the channel is allocated to the process.

(4) The process requesting IO has now obtained the device, the controller, and the channel, and data transfer can begin.
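
The following pseudocode-style sketch in C walks through the allocation steps above, using hypothetical SDT/UCB/CUCB/CCB structures whose fields are invented to match the description; a real system's tables and wait queues would be far more elaborate.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical control blocks matching the tables described above. */
typedef struct { char status; }                   ccb;        /* channel    */
typedef struct { char status; ccb *channel; }     cucb;       /* controller */
typedef struct { char status; cucb *controller; } ucb;        /* device     */
typedef struct { const char *name; ucb *device; } sdt_entry;  /* SDT entry  */

#define BUSY 1
#define IDLE 0

/* Returns 0 on success; -1 means the requesting process must wait on a queue. */
int allocate_path(sdt_entry *sdt, int n, const char *device_name) {
    for (int i = 0; i < n; i++) {
        if (strcmp(sdt[i].name, device_name) != 0) continue;
        ucb *dev = sdt[i].device;
        if (dev->status == BUSY) return -1;                  /* wait for the device */
        cucb *ctl = dev->controller;
        if (ctl->status == BUSY) return -1;                  /* wait for the controller */
        ccb *chan = ctl->channel;
        if (chan->status == BUSY) return -1;                 /* wait for the channel */
        dev->status = ctl->status = chan->status = BUSY;     /* whole path allocated */
        return 0;                                            /* transfer may begin */
    }
    return -1;                                               /* no such device in the SDT */
}

int main(void) {
    ccb ch = {IDLE}; cucb ct = {IDLE, &ch}; ucb dv = {IDLE, &ct};
    sdt_entry sdt[] = { { "printer0", &dv } };
    printf("first request:  %s\n", allocate_path(sdt, 1, "printer0") == 0 ? "allocated" : "must wait");
    printf("second request: %s\n", allocate_path(sdt, 1, "printer0") == 0 ? "allocated" : "must wait");
    return 0;
}
```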

