Zero Copy Technology in Linux, part 1


Overview
Huang Xiaochen, Software Engineer, IBM
Feng Rui, Software Engineer, IBM

Abstract:
This series of two articles introduces several zero copy techniques available on Linux, briefly describing how each is implemented, what its characteristics are, and where it applies. This first part covers the background: why Linux needs zero copy, and which zero copy techniques Linux provides.

Introduction
The standard I/O interfaces of the traditional Linux operating system are based on data copying: an I/O operation moves data between a buffer in the kernel address space and a buffer defined in the application address space. The chief benefit of this design is that it reduces actual disk I/O: if the requested data is already in the operating system's cache, no physical disk access is needed. However, the copying itself consumes significant CPU time, which limits the operating system's ability to move data efficiently.
Zero copy techniques can markedly improve data transfer performance: when kernel drivers (such as the network stack or a disk storage driver) handle I/O data, these techniques reduce or even completely eliminate unnecessary CPU copies. Modern CPUs and storage architectures provide many features that make zero copy practical to implement. On the other hand, because storage architectures are complex and the network protocol stack sometimes must process the data itself, zero copy can have significant drawbacks and may in some cases lose its advantages entirely.

Why do we need the zero copy technology?
Many network servers today follow the client-server model: a client requests data or a service, and the server must respond and deliver the required data. As network services have grown in popularity, video applications in particular have developed rapidly. Modern client machines have ample capacity for the load such applications impose, but on the server side, keeping up with the resulting network traffic is much harder, and as the number of clients grows rapidly, the server easily becomes the performance bottleneck. On heavily loaded servers, the operating system itself is usually the culprit. For example, when the application issues a write() or send() system call, the operating system copies the data from a buffer in the application address space into a kernel buffer. This keeps the interface simple, but it costs performance: the copy consumes both CPU time and additional memory bandwidth.
Typically, a client sends a request through its network interface card; the operating system delivers the request to the server application, which processes it; the operating system then sends the result back out through the network adapter.
The following section will briefly introduce how traditional servers transmit data, and what problems exist in the data transmission processing process that may cause performance loss on servers.
Data transmission process of traditional servers in Linux
Traditional I/O on Linux is buffered I/O, and the data being transferred is usually copied between buffers several times along the way. In the common case, the application allocates a suitably sized buffer, reads a block of data from a file into it, and then sends that data to the receiver over the network. The application accomplishes this with the read() and write() system calls and is unaware of the copies the operating system performs underneath. The Linux kernel may copy the data several times during the transfer, for purposes such as realigning or verifying it, and in some cases these copies substantially reduce transfer performance.
When an application requests a piece of data, the kernel first checks whether the data is already in a buffer in the kernel address space because the same file was accessed before. If the data is not found in the kernel buffer, the kernel reads it from disk into a kernel buffer. If this read is performed by DMA, the CPU only manages the buffer and sets up and completes the DMA transfer; it need do nothing else while the data moves, and the operating system is notified when the DMA read finishes so that it can continue processing. The read() system call then copies the data from the kernel buffer into the application's address space at the address the application specified. Next, to send the data out, the operating system must copy it from the user buffer into a kernel buffer associated with the network stack; this step again consumes CPU time. Once that copy completes, the data is packetized and handed to the network interface card. During transmission the application may return and do other work. After the write() call returns, the contents of the user buffer can be safely changed or discarded, because the operating system holds its own copy in the kernel buffer; that copy is released once the data has been successfully handed to the hardware.

As the description above shows, this traditional transfer path copies the data at least four times, and even with DMA handling the hardware transfers, the CPU still has to touch the data twice. On the read side, the data does not come straight from the disk to the application; it must first pass through the operating system's file system layer. On the write() side, the data must first be split into chunks matching the size of the packets to be transmitted, packet headers must be prepared, and the data must be checksummed.

Figure 1. Traditional data transfer using the read() and write() system calls

Zero Copy Technology Overview
What is zero copy?

Put simply, zero copy is a set of techniques that keep the CPU from copying data from one region of memory to another. Applied in operating system device drivers, file systems, and network protocol stacks, zero copy can greatly improve the performance of certain applications and let them use system resources more efficiently. The improvement comes from letting the CPU do other work while the data is being moved. Zero copy reduces the number of data copies and shared-bus operations and eliminates unnecessary intermediate copies of the data between memory regions, thereby improving transfer efficiency; it also reduces the overhead of context switches between the user application's address space and the kernel's address space. Bulk data copying is, after all, a simple task, and from the operating system's point of view, keeping the CPU busy with such a simple task is a waste of resources: if some other system component can do the copying instead, the CPU is freed for other work and system resources are used more effectively. In summary, the goals of zero copy can be stated as follows:
Avoid data copying

  • Avoid copying data between operating system kernel buffers.
  • Avoid copying data between the operating system kernel and the user application address space.
  • Let applications access hardware storage directly, bypassing the operating system.
  • Use DMA for data transfers whenever possible.

Combine multiple operations

  • Avoid unnecessary system calls and context switches.
  • Cache data that does need to be copied, so it can be handled later or in batches.
  • Let hardware process the data whenever possible.

As mentioned above, zero copy matters most on high-speed networks, where the link capacity approaches or even exceeds the CPU's processing capacity. In that situation the CPU may spend nearly all of its time copying the data to be transmitted, with no capacity left for anything else; it becomes the performance bottleneck, capping the communication rate below what the link could sustain. As a rule of thumb, a CPU can process roughly one bit of data per clock cycle, so a 1 GHz processor can keep up with traditional data copying on a 1 Gbit/s link; on a 10 Gbit/s link, however, the same processor cannot, and zero copy becomes essential. Even at 1 Gbit/s, zero copy is already applied in supercomputer clusters and large commercial data centers, and as 1 Gbit/s, 10 Gbit/s, and 100 Gbit/s networks become commonplace, zero copy will only grow more important, because network link capacity is outpacing CPU processing capacity. Traditional data copying, constrained by conventional operating systems and communication protocols, limits transfer performance. By reducing the number of copies and simplifying the protocol-processing layers, zero copy provides a faster data path between applications and the network, effectively cutting communication latency and raising throughput. It is one of the key technologies behind high-speed network interfaces in hosts, routers, and other devices.
Modern CPUs and storage architectures provide many features for reducing or avoiding unnecessary CPU copies during I/O. However, these advantages are often overstated: the complexity of the storage architecture, and the data processing the network protocol requires, can cause problems and sometimes wipe out the benefits of zero copy entirely. The next chapter introduces several zero copy techniques found in the Linux operating system, briefly describes how they are implemented, and analyzes their weaknesses.
Zero Copy technology classification
Zero copy techniques have developed along many different lines, and many implementations exist, but no single one suits every scenario. Linux offers a variety of zero copy techniques, most of which appeared in different kernel versions; some older ones have evolved considerably across kernel releases or are gradually being replaced by newer techniques. This article groups them by applicable scenario. Broadly, zero copy on Linux falls into the following categories:

  • Direct I/O: the application accesses the hardware storage directly, and the operating system kernel merely assists with the transfer. This kind of zero copy suits cases where the kernel does not need to process the data at all: data moves directly between a buffer in the application address space and the disk, without the page cache support the Linux kernel normally provides.
  • Avoiding copies between kernel and user buffers during transmission: sometimes the application does not need to touch the data at all while it is being transferred, so the copy from the Linux page cache into a user-process buffer can be skipped entirely and the data handled within the page cache. In certain special cases this technique yields very good performance. The relevant Linux system calls include mmap(), sendfile(), and splice().
  • Optimizing the transfer between the Linux page cache and user-process buffers: this approach keeps the traditional communication model but makes the copying between the user-process buffer and the operating system's page cache more flexible. On Linux it relies mainly on copy-on-write.

The first two approaches aim to avoid buffer copies between the application address space and the kernel address space altogether. They generally apply only in special cases, such as when the data being transferred needs no processing by either the kernel or the application. The third approach keeps the traditional model of transferring data between the application address space and the kernel address space, but optimizes the transfer itself. We know that data can move between hardware and software by DMA with almost no CPU involvement, freeing the CPU for other work; but between a buffer in the user address space and the Linux kernel's page cache there is no DMA-like mechanism, and the CPU must participate in the copy from start to finish. The third approach therefore aims to make this transfer between user address space and kernel address space as efficient as possible.

Summary
This series of articles introduces zero copy techniques on Linux. This first part has covered the basic concept of zero copy, explained why Linux needs it, and briefly introduced the background of the zero copy techniques Linux provides. The second part of the series will describe those techniques in detail.

Original article: http://www.ibm.com/developerworks/cn/linux/l-cn-zerocopy1/index.html
