Java NIO (1): Basic I/O Concepts


Buffer operation

Buffers, and how buffers work, are the basis of all I/O. "Input/output" means nothing more than moving data into or out of a buffer. A process performs I/O by making a request to the operating system either to drain the data in a buffer (a write) or to fill a buffer with data (a read).
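This fill/drain cycle can be sketched with Java NIO's `ByteBuffer` and `FileChannel`. The class and method names below (`BufferFillDrain`, `readAll`) are illustrative, not from the original text; a small buffer is used deliberately so the fill/drain loop runs more than once.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class BufferFillDrain {
    // Fill a buffer from the channel (a read), then drain it into a String.
    static String readAll(Path file) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(16); // small on purpose: forces repeated fills
            while (ch.read(buf) != -1) {   // fill: the kernel copies data into our buffer
                buf.flip();                // switch the buffer from filling to draining
                sb.append(StandardCharsets.UTF_8.decode(buf));
                buf.clear();               // ready to be filled again
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("nio-demo", ".txt");
        Files.writeString(tmp, "hello buffer world");
        System.out.println(readAll(tmp));
        Files.delete(tmp);
    }
}
```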

When a process issues a read() system call, it asks for its buffer to be filled. The kernel then issues a command to the disk controller hardware telling it to read data from the disk. The disk controller writes the data directly into a kernel memory buffer via DMA, without help from the main CPU. Once the disk controller has filled that buffer, the kernel copies the data from the temporary buffer in kernel space into the buffer the process specified when it made the read() call.

Note the concepts of user space and kernel space. User space is the region where regular processes live; the JVM is a regular process and resides in user space. User space is a non-privileged zone: code running there cannot, for example, access hardware devices directly. Kernel space is the region where the operating system resides. Kernel code has special powers: it can communicate with device controllers, control the running state of user-space processes, and so on. Most importantly, all I/O passes through kernel space, either directly (as described here) or indirectly. When a process requests an I/O operation, it executes a system call (sometimes called a trap) that transfers control to the kernel.
The low-level functions familiar to C programmers, open(), read(), write(), and close(), do nothing more than set up and execute the appropriate system calls. When the kernel is invoked this way, it takes whatever steps are necessary to find the data the process requires and transfer it into the specified buffer in user space. The kernel tries to cache or pre-read data, so the data the process needs may already be in kernel space; if so, it simply needs to be copied. If the data is not in kernel space, the process is suspended while the kernel reads the data into memory.
You might think it redundant to copy data from kernel space to user space: why not have the disk controller send the data straight to the buffer in user space? There are a few problems with this. First, hardware usually cannot access user space directly. Second, block-oriented hardware such as a disk operates on fixed-size blocks of data, whereas a user process may ask for an arbitrarily sized or non-aligned chunk of data. The kernel therefore acts as an intermediary, breaking down and reassembling the data as it moves between user space and storage devices.

Many operating systems make this assembly/disassembly process even more efficient. Using the concept of scatter/gather, a process can pass a list of buffer addresses to the operating system in a single system call. The kernel then fills or drains the buffers in sequence: on a read it scatters the data across multiple user-space buffers, and on a write it gathers the data from multiple buffers. This saves the user process from issuing multiple (potentially expensive) system calls, and the kernel can optimize the transfer because it knows up front about all the data being moved. On a system with multiple CPUs, several buffers can even be filled or drained simultaneously.
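Java NIO exposes scatter/gather directly: `FileChannel.write(ByteBuffer[])` gathers from several buffers in one call, and `FileChannel.read(ByteBuffer[])` scatters into them. A minimal sketch (the class name and the fixed header/body sizes are assumptions made for the example):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ScatterGather {
    // Gather: one write() call drains both buffers in order.
    static void gatherWrite(Path file, String header, String body) throws IOException {
        ByteBuffer[] parts = {
            ByteBuffer.wrap(header.getBytes(StandardCharsets.UTF_8)),
            ByteBuffer.wrap(body.getBytes(StandardCharsets.UTF_8))
        };
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING)) {
            ch.write(parts);
        }
    }

    // Scatter: one read() call fills the buffers in sequence.
    static String[] scatterRead(Path file, int headerLen, int bodyLen) throws IOException {
        ByteBuffer header = ByteBuffer.allocate(headerLen);
        ByteBuffer body = ByteBuffer.allocate(bodyLen);
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            ch.read(new ByteBuffer[] { header, body }); // header fills first, then body
        }
        header.flip();
        body.flip();
        return new String[] {
            StandardCharsets.UTF_8.decode(header).toString(),
            StandardCharsets.UTF_8.decode(body).toString()
        };
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("sg", ".bin");
        gatherWrite(tmp, "HEAD", "payload");
        String[] parts = scatterRead(tmp, 4, 7);
        System.out.println(parts[0] + " | " + parts[1]); // prints "HEAD | payload"
        Files.delete(tmp);
    }
}
```

Fixed-size buffer slices like this are typical for protocols with known header layouts; for variable-length data you would read the header first to learn the body size.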
Virtual Memory

All modern operating systems use virtual memory. Virtual memory means that artificial (virtual) addresses are used in place of physical (hardware RAM) memory addresses. This brings many benefits, which fall into two categories:
1. More than one virtual address can refer to the same physical memory address.
2. The virtual address space can be larger than the physically available memory.
As mentioned in the previous section, a device controller cannot write into user space directly via DMA, but the same effect can be achieved by exploiting the first property above. By mapping a kernel-space address and a user-space virtual address to the same physical address, the DMA hardware (which sees only physical memory addresses) can fill a buffer that is visible to both the kernel and the user-space process.
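In Java, this shared mapping is what `FileChannel.map()` gives you: a `MappedByteBuffer` whose pages are backed by the same physical memory the kernel uses, so no kernel-to-user copy is needed. A minimal read-only sketch (class and method names are illustrative):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedRead {
    // Map the file into the process address space; the OS pages data in on
    // demand, and the same physical pages back both the kernel's cache and
    // this buffer.
    static String mapAndRead(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            byte[] bytes = new byte[map.remaining()];
            map.get(bytes);
            return new String(bytes, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("map", ".txt");
        Files.writeString(tmp, "mapped contents");
        System.out.println(mapAndRead(tmp));
        Files.delete(tmp);
    }
}
```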

This is great: it eliminates the copy between kernel and user space. The catch is that the kernel and user buffers must share the same page alignment, and the buffer size must be a multiple of the disk controller's block size (typically a 512-byte disk sector). Operating systems divide their memory address space into pages, which are fixed-size groups of bytes. A memory page is always a multiple of the disk block size and nearly always a power of 2 (which simplifies addressing). Typical memory page sizes are 1,024, 2,048, and 4,096 bytes.

Virtual memory paging
To support the second property of virtual memory (an address space larger than physical memory), virtual memory pages must be able to persist in external disk storage when they are paged out, freeing physical memory for other virtual pages. Keeping the memory page size a multiple of the disk block size lets the kernel issue commands directly to the disk controller hardware to write memory pages out to disk or reload them when needed.

Modern CPUs contain a memory management unit (MMU) subsystem, logically located between the CPU and physical memory. For a given virtual address, the MMU determines the page number (often implemented by shifting or masking the address bits) and converts the virtual page number to a physical page number (a step done in hardware, and therefore extremely fast). If no valid mapping currently exists between the virtual page and physical memory, the MMU raises a page fault.

A page fault generates a trap (similar to a system call) that hands control to the kernel, along with the virtual address that caused the fault. The kernel then arranges to bring that page into physical memory. This often means moving some other page out of physical memory to make room for the new one. If the outgoing page is dirty (its contents have changed since it was created or last paged in), the kernel must first perform a page-out, copying the page's contents to the paging area on disk.
If the requested address is not a valid virtual memory address (i.e., it does not belong to any memory segment of the executing process), the page cannot be validated and a segmentation fault is generated. Control then passes to a different part of the kernel, usually resulting in the process being killed. Once the faulting page has been validated, the MMU is updated to establish the new virtual-to-physical mapping (breaking the mapping of the evicted page if necessary), and the user process resumes. The process that caused the page fault never notices; it all happens transparently.

File I/O

File I/O takes place within the context of a file system, and a file system is very different from a disk. A disk stores data in sectors, usually 512 bytes each. It is a hardware device that knows nothing about files; it simply provides a series of slots where data can be stored. In this respect disk sectors resemble memory pages: they are uniform in size and addressable as a large array. A file system is a higher level of abstraction, a particular way of arranging and interpreting the data on a disk (or other random-access, block-oriented device). The code you write almost always interacts with a file system rather than with the disk directly. It is the file system that defines the abstractions of file names, paths, files, file attributes, and so on.
A file system organizes a sequence of uniformly sized data blocks. Some blocks store meta-information: maps of free blocks, directories, indexes, last-update times, and so on; other blocks hold file data. File-system block sizes range from 2,048 to 8,192 bytes and are always a multiple of the underlying memory page size. As in the previous section, all I/O is ultimately done through page scheduling, a very low-level operation that occurs directly between disk sectors and memory pages, whereas file I/O can be of arbitrary size and at arbitrary offsets. A modern, paging operating system uses demand paging to bring the required file data into memory. When a user process requests file data, the file system carries out the following steps:

1. Determine which file-system pages (groups of disk sectors) the requested data spans. The file content and metadata on disk may span multiple file-system pages, and those pages may not be contiguous.
2. Allocate enough memory pages in kernel space to hold the identified file-system pages.
3. Establish mappings between those memory pages and the file-system pages on disk.
4. Generate a page fault for each memory page.
5. The virtual memory system catches the page faults and schedules page-ins, reading the page contents from disk and validating the pages.
6. Once the page-ins are complete, the file system parses the raw data to obtain the requested file content or attribute information.

It is important to note that this file-system data is cached like any other memory pages. On subsequent I/O requests, some or all of the file's data may still be in physical memory and can be reused without going back to disk. Most operating systems also assume that a process will continue reading the rest of the file and therefore pre-read additional file-system pages. If memory contention is not severe, these pages can remain valid for quite a while, so when the file is later opened again by the same or a different process, no disk access may be needed at all. You may have noticed this yourself: when repeating a similar operation, such as searching for a string across several files, the second run seems much faster.

Writing file data follows similar steps. Changes to a file's contents (via write()) dirty the affected pages, which are later paged out to synchronize the file's contents on disk. Files are enlarged by mapping the new data to free file-system pages, which are then flushed to disk in a subsequent write operation.
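Because writes only dirty cached pages, an application that needs the data on disk immediately must ask for a flush rather than wait for the pageout mechanism. In Java NIO that is `FileChannel.force()`; a minimal sketch (class and method names are illustrative):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FlushDirtyPages {
    // write() dirties cached pages; force(true) asks the kernel to flush both
    // the file content and its metadata to the storage device now.
    static void writeAndSync(Path file, String data) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING)) {
            ch.write(ByteBuffer.wrap(data.getBytes(StandardCharsets.UTF_8)));
            ch.force(true);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("sync", ".txt");
        writeAndSync(tmp, "durable bytes");
        System.out.println(Files.readString(tmp));
        Files.delete(tmp);
    }
}
```

Forcing on every write defeats the caching described above, so it is usually reserved for durability points such as transaction commits.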
