Computer Input/Output System

Computer input and output systems, excerpted from "Operating Systems: Internals and Design Principles"

By William Stallings

Translated by Liu Jianwen (http://blog.csdn.net/keminlau)

Keywords: I/O functions; logical hierarchy of I/O functions; step-by-step evolution of the I/O system

Intro

The I/O subsystem may be the most troublesome part of operating system design. Because hardware devices and applications are so diverse, it is difficult to design a general, unified solution for them. This section describes the I/O subsystem in order from the general to the specific.

  • First, we start with the most general view of I/O, that is, the I/O organization and functions defined by the computer architecture;
  • Next comes the implementation of the I/O system within the operating system, covering the design goals of I/O, the way the I/O function is structured, and I/O buffering in detail (buffering is a basic I/O service provided by the operating system to improve overall I/O performance);
  • The following section then turns to the more specific topic of disk I/O. In modern computers, disk I/O plays an important role in overall system performance (first, many current applications are centered on data processing; second, multiprogramming depends on fast assistance from external storage).
I/O devices

There are many types of I/O devices. One way to classify them divides them into three categories: human-readable devices, machine-readable devices, and communication devices.

  1. Human-readable devices: these devices are intended for users and interact with users, such as printers, monitors, keyboards, and terminals.
  2. Machine-readable devices: these devices are machine-oriented, with no user intervention during the interaction, such as disk drives, sensors, and device controllers (a controller is part of a device, but it can also be treated as a device in its own right, like the many chips on a motherboard that implement controllers).
  3. Communication devices: these devices are also machine-oriented, with no user intervention during the interaction, but the party on the other end is not local. Examples are network interface cards (NICs) and modems.

These three categories of devices differ greatly from one another, and devices within the same category also differ greatly (so this classification is clearly only a first cut for I/O system design). The key dimensions along which they differ are as follows:

  • Data rate: the data transfer rates of different devices may differ by several orders of magnitude;
  • Application: differences in how a device is used also lead to different management policies in the operating system and its supporting software. For example, to let a disk device store data in files, the operating system must provide file management software (a file system) for it; to let a disk back virtual memory, the operating system and the CPU must together provide virtual-memory hardware and software facilities. A terminal is another example: the end user may be an ordinary user or an administrator, which means the operating system must apply software policies with different privilege levels and priorities to the terminal device.
  • Complexity of control: the control interfaces of different devices naturally differ. A printer needs only a simple control interface, whereas a disk's control interface is complex. Because of this varying complexity, the operating system offloads part of the I/O control into a hardware controller called the I/O module; at one end the I/O module handles the device-specific control rules, and at the other end it presents a consistent interface for the operating system (the driver) to control.
  • Unit of transfer: the data transferred by a device may be a stream of bytes or larger blocks.
  • Data representation: different devices use different encoding schemes, including character sets and parity rules.
  • Error conditions: the nature of errors, the way they are reported, and the way they are handled differ from device to device.

From the perspective of operating systems and user processes, it is very difficult to design a unified I/O system for such diverse devices.

Evolution of the I/O function

Computer technology advances rapidly, and the evolution of computer systems and of their individual components follows recognizable patterns. The I/O function has evolved as much as any part of the system. The following is an overview of that evolution:

  • 1. The processor directly controls the peripheral device. This form of I/O can still be found in some simple microprocessor-controlled devices.

    P.s. In this form, an I/O operation looks like reading or writing a CPU register or a memory location, except that the bytes carry I/O semantics.

  • 2. A device controller (or I/O module) is added. The processor drives the device using programmed I/O (also called busy-waiting or polling). This form of I/O decouples the processor from the specific characteristics of the device and provides a more consistent approach to input and output.

    P.s. Where is the consistency? The controller generally has status registers that describe the current state of the device, such as whether the device is ready, whether the data to be input is already in the data register, and whether an error has occurred. The CPU must read the status register to determine the device state before performing an I/O operation; this is a polled, synchronous style of operation. In this form, an I/O port has at least three operation semantics: input, output, and status. A minimal polling sketch is given after this list.

  • 3. The interrupt mechanism is introduced on top of the previous step. Interrupts free the CPU from waiting for I/O operations to complete, improving efficiency.
  • 4. The I/O module is given direct access to memory via DMA. Apart from starting the transfer and handling its completion, moving data to or from the device no longer requires CPU intervention.
  • 5. The I/O module (originally a controller) is upgraded to a processor in its own right, with an instruction set designed for I/O. This kind of I/O with an independent processor is called an I/O channel. The central processor directs the I/O processor to execute an I/O program held in main memory; the I/O processor fetches and executes those instructions without CPU intervention. In this way, the central processor moves even further away from data transfer and can concentrate on computation. (Kemin: there are two processors here, one general-purpose and one dedicated.)
  • 6. The I/O channel acquires a memory of its own. At this step the I/O system has effectively become a dedicated computer in its own right. (Kemin: were all the earlier I/O modules finite state machines, that is, machines with limited computing power? If so, does a computer contain many units of different capability collaborating to complete its computing tasks?) I/O systems with dedicated processors are commonly used, for example, to drive interactive terminals, where the I/O processor takes over control of the terminal. (Kemin: how does it control it?)
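To make the programmed I/O of step 2 concrete, here is a minimal polling sketch in C. The register addresses, bit layout, and names are invented for illustration and do not correspond to any real device:

```c
#include <stdint.h>

/* Hypothetical memory-mapped controller registers (addresses are made up). */
#define DEV_STATUS   ((volatile uint8_t *)0x40001000)  /* status register  */
#define DEV_DATA     ((volatile uint8_t *)0x40001004)  /* data register    */
#define STATUS_READY 0x01                              /* device-ready bit */
#define STATUS_ERROR 0x80                              /* error bit        */

/* Programmed I/O (polling): the CPU loops on the status register until the
 * controller reports that a byte is available, then reads the data register. */
int pio_read_byte(uint8_t *out)
{
    while ((*DEV_STATUS & STATUS_READY) == 0)
        ;                          /* busy-wait: the CPU is tied up here */
    if (*DEV_STATUS & STATUS_ERROR)
        return -1;                 /* the status semantics: report failure */
    *out = *DEV_DATA;              /* the input semantics */
    return 0;
}
```

The busy-wait loop is exactly what steps 3 and 4 above were introduced to avoid.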
Direct Memory Access

A figure in the original text shows the most common logical placement of DMA. Like the CPU, the DMA module is a logical component of the computer: it takes over from the CPU the job of controlling the system bus to transfer data to and from memory. In general, the DMA module may use the bus only when the CPU does not need it, or it may force the CPU to suspend operation temporarily; the latter technique is called cycle stealing.

DMA works like this. When the CPU wants to read or write a block of data, it issues a command to the DMA module containing the following information:

  • whether a read or a write is requested, signalled on the read/write control line between the CPU and the DMA module;
  • the address of the I/O device involved, communicated on the data lines;
  • the starting location in memory to read from or write to, communicated on the data lines and stored in the DMA module's address register;
  • the number of words to be read or written, communicated on the data lines and stored in the DMA module's data count register.

After issuing the command, the CPU continues with other work while the DMA controller handles the transfer. The DMA controller moves the entire block of data, one word at a time, directly to or from memory without going through the CPU. When the transfer is complete, the DMA module sends an interrupt signal to the CPU. Thus the CPU is involved only at the beginning and end of the transfer, as sketched below.
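As a rough illustration of the command just described, the following C sketch programs a hypothetical memory-mapped DMA controller. The register layout, base address, and bit definitions are assumptions made for the example, not any real controller's interface:

```c
#include <stdint.h>

/* Hypothetical DMA controller registers; the fields mirror the four items
 * listed above (direction, device address, memory address, word count). */
struct dma_regs {
    volatile uint32_t control;     /* bit 0: 1 = read from device, 0 = write */
    volatile uint32_t device_addr; /* I/O device (port) address              */
    volatile uint32_t mem_addr;    /* starting memory address                */
    volatile uint32_t count;       /* number of words to transfer            */
};

#define DMA       ((struct dma_regs *)0x40002000)  /* made-up base address */
#define DMA_START (1u << 31)
#define DMA_READ  (1u << 0)

/* Program the controller and return immediately; the CPU is free to run
 * other work until the controller raises its completion interrupt. */
void dma_read_block(uint32_t dev, void *buf, uint32_t words)
{
    DMA->device_addr = dev;
    DMA->mem_addr    = (uint32_t)(uintptr_t)buf;
    DMA->count       = words;
    DMA->control     = DMA_START | DMA_READ;       /* kick off the transfer */
}
```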

A figure in the original text shows the points within the instruction cycle at which the CPU may be suspended. The CPU is suspended when it tries to use a bus that the DMA module has taken over; the DMA module transfers one word and then returns control of the bus to the CPU. Note that this is not an interrupt: the CPU does not save any context or take any other action; it simply pauses for one bus cycle. The overall effect is that the CPU runs somewhat more slowly (Kemin: because the bus is shared, there is a small cost). Even so, for transferring large blocks of data, DMA is far more efficient than interrupt-driven or programmed I/O.

The DMA mechanism can be configured in a variety of ways, several of which are shown in a figure in the original text. In the first configuration, all modules of the computer share a single system bus. The DMA module, acting as a surrogate for the CPU, exchanges data between memory and an I/O module using programmed I/O. This configuration is cheap but inefficient: as with CPU-driven programmed I/O, each word transferred consumes two bus cycles.

A common way to improve efficiency is to specialize what was general. The number of bus cycles required can be reduced substantially by integrating the DMA and I/O functions. In the second configuration, there is a separate path, distinct from the system bus, between the DMA module and one or more I/O modules; the DMA logic may actually be part of an I/O module, or it may be a separate module that controls one or more I/O modules. This idea can be taken one step further by connecting all the I/O modules to the DMA module over a separate I/O bus, which is the third configuration. Sharing an I/O bus reduces the number of I/O interfaces on the DMA module to one and makes the configuration easy to expand (Kemin: see the original text; I am not certain of the exact configuration details). In the latter two configurations, the system bus shared by the DMA module, the CPU, and memory carries only the control signals exchanged between the CPU and the DMA module and the data exchanged between the DMA module and memory; data transfer between the DMA and I/O modules is removed from the system bus entirely.

Operating system design goals

The design of the I/O system has two main goals: efficiency and generality. Efficiency matters because I/O operations have become the biggest bottleneck in computer performance.

A figure at the start of the original article shows that input and output devices are far slower than memory and the processor. One response is multiprogramming, which lets some processes continue executing while others wait for I/O operations to complete. Even though memories are now quite large, I/O often still cannot keep up with the processor. To keep the processor busy, swapping is used to bring additional ready processes from disk into memory for execution, but swapping is itself an I/O operation. The main thrust of I/O system design is therefore efficiency, and because disk I/O matters most, it receives the most attention; this chapter concentrates mainly on the efficiency of disk I/O.

The other major goal is generality. In the interest of simplicity and freedom from error, it is desirable to handle all devices in a uniform way, both in the interface presented to user processes and in the way the operating system manages them. Because of the diversity of devices, true uniformity is difficult to achieve in practice. What can be done is to design the I/O function using a hierarchical, modular approach: device-specific I/O details are hidden in low-level routines, while user processes and the upper levels of the operating system see devices only in terms of general functions such as read, write, open, close, lock, and unlock.
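One common way to realize this kind of generality is a per-device table of operations that upper layers call through without knowing which device lies underneath. The sketch below uses invented names; it is not the interface of any particular kernel:

```c
#include <stddef.h>

/* Each driver fills in a table of routines; upper layers call through it. */
struct dev_ops {
    int  (*open) (void *dev);
    int  (*close)(void *dev);
    long (*read) (void *dev, void *buf, size_t n);
    long (*write)(void *dev, const void *buf, size_t n);
};

struct device {
    const struct dev_ops *ops;  /* device-specific routines   */
    void *priv;                 /* the driver's private state */
};

/* Upper layers see the same read() regardless of the device type. */
long device_read(struct device *d, void *buf, size_t n)
{
    return d->ops->read(d->priv, buf, n);
}
```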

Logical hierarchy of I/O functions

When discussing system structure, we saw that modern operating systems are layered. The guiding principle of layering is that the layers of the operating system should be separated according to complexity of function, time scale, and level of abstraction. Layering divides the operating system into a number of layers, each responsible for a subset of the operating system's functions (Kemin: if the system is layered, shouldn't that subset be vertical rather than horizontal? And if so, is each layer really a subset of functions, or rather one stage of a function?). A higher layer is implemented in terms of the primitive operations provided by the layer below it, which it packages to realize its own functions. Ideally, the layers interact only through their interfaces, so that a change in one layer does not affect the others. Layering lets us decompose one complicated large problem into several manageable smaller ones.

In general, lower layers operate on shorter time scales. For example, some parts of the operating system interact directly with the hardware, where events occur within a thousandth of a second, while at the other end of the spectrum the operating system interacts with the user on a time scale of seconds. This range of time scales is an ideal place to apply layering.

Applying the idea of layering to I/O yields an I/O system organized as a hierarchy whose detailed structure varies with the device and the application. A figure in the original text shows the three most important cases: a local peripheral device, a communications port, and a file system.

Let us analyze the simplest case first: a local peripheral device that communicates in a stream of bytes or a stream of records.

Logical I/O layer: as the name suggests, the I/O provided by this layer is logical (logical resources and logical operations). It hides the details of device-specific data and control and offers the user a simple interface: a device identifier and simple commands such as open, close, read, and write.

Device I/O layer: the requested operations and data from the logical layer (buffered characters and records) are translated at this layer into the appropriate sequences of I/O instructions, channel commands, and controller orders. Buffering is implemented at this layer to improve performance.

Scheduling and control layer: the queueing and scheduling of I/O operations take place at this layer, as does control of the operations; consequently, interrupt handling and the collection and reporting of I/O status also happen here. This is the lowest layer of software, the one that actually interacts with the hardware.
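The following C sketch traces a read request down through the three layers just described. Every function name and the command encoding are invented for the example; a real kernel's interfaces would differ:

```c
#include <stddef.h>
#include <stdio.h>

/* Scheduling and control layer: queue the controller commands. Interrupt
 * handling and status reporting would also live at this level. */
static void sched_enqueue(int dev_id, const unsigned char *cmds, size_t n_cmds)
{
    printf("device %d: queued %zu controller commands\n", dev_id, n_cmds);
    (void)cmds;
}

/* Device I/O layer: translate the buffered request into a device-specific
 * command sequence and hand it to the scheduler. */
static void device_io_read(int dev_id, void *buf, size_t n)
{
    unsigned char cmds[2] = { 0x01 /* e.g. SEEK */, 0x02 /* e.g. READ */ };
    (void)buf; (void)n;            /* data would be staged through a buffer */
    sched_enqueue(dev_id, cmds, 2);
}

/* Logical I/O layer: the simple interface a user process sees. */
long logical_read(int dev_id, void *buf, size_t n)
{
    device_io_read(dev_id, buf, n);
    return (long)n;                /* a real system reports actual progress */
}
```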

In the case of a communications device, the layered structure is similar to that of a local peripheral, except that the logical I/O layer is replaced by a communications architecture. The communications architecture is itself composed of a series of layers, such as the well-known seven-layer Open Systems Interconnection (OSI) model.

In the case of a file system, three layers appear that were not present in the previous two examples:

Directory management layer: at this layer, symbolic file names are converted into identifiers that reference files either directly or indirectly through a file descriptor or index table. This layer also handles user-oriented directory operations, such as add, delete, and reorganize.

File System layer: This layer processes the logical structure of files and user-oriented file operations, such as open, close, read, and write. File Access permission management is also implemented at this layer.

Physical organization layer: just as virtual memory addresses must be converted into physical memory addresses through the segmentation and paging structure, logical references to files and records must be converted into physical external-storage addresses through the physical track and sector structure of the medium. Allocation of external storage space and management of external storage buffers are also handled at this layer.
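As a rough sketch of the chain of translations performed by these three layers, the following C fragment uses invented names and trivial stub mappings; it is illustrative only:

```c
#include <stdint.h>

struct disk_addr { uint32_t track; uint32_t sector; };

/* Directory management: symbolic name -> file identifier (stubbed). */
static uint32_t dir_lookup(const char *path) { (void)path; return 42; }

/* File system layer: (file id, byte offset) -> logical block within the
 * file, after an access-permission check (omitted in this stub). */
static uint32_t file_logical_block(uint32_t id, uint64_t off)
{
    (void)id;
    return (uint32_t)(off / 4096);          /* assume 4 KiB logical blocks */
}

/* Physical organization: logical block -> physical track and sector. */
static struct disk_addr phys_map(uint32_t id, uint32_t blk)
{
    (void)id;
    return (struct disk_addr){ blk / 64, blk % 64 };  /* 64 sectors/track */
}

/* A read at a byte offset passes through all three translations. */
struct disk_addr resolve(const char *path, uint64_t offset)
{
    uint32_t id  = dir_lookup(path);
    uint32_t blk = file_logical_block(id, offset);
    return phys_map(id, blk);
}
```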
