Understanding the database: random I/O & sequential I/O

Source: Internet
Author: User
Before talking about these two concepts, let's first distinguish large I/O from small I/O. As a rule of thumb, an I/O request of 16 KB or less is considered small I/O, while one of 32 KB or more is considered large I/O. This distinction is often used to gauge whether you understand how I/O size interacts with the performance of caches, RAID layouts, and LUN configurations.

Currently, most databases still run on traditional mechanical disks. The overall system design should therefore favor sequential I/O as much as possible and avoid the expensive overhead of seek time and rotational delay. Random small I/O consumes more processing resources than sequential large I/O. A workload dominated by random small I/O cares about how many I/O operations the system can process per second, i.e. IOPS; OLTP is a typical example. A workload dominated by large sequential I/O cares about bandwidth, i.e. MB/s; OLAP is a typical example. So if a system hosts several different applications, you must understand what each one needs: IOPS or bandwidth (a back-of-envelope comparison is sketched below).
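
As a rough illustration of the IOPS-versus-bandwidth distinction, the figures below are assumptions chosen for the example, not measurements of any particular device:

```python
# Back-of-envelope comparison of an IOPS-bound and a bandwidth-bound
# workload. The IOPS and I/O-size figures are illustrative assumptions,
# not measurements of any real device.

def bandwidth_mb_per_s(iops: int, io_size_kb: int) -> float:
    """Bandwidth implied by a given IOPS rate and I/O size."""
    return iops * io_size_kb / 1024

# OLTP-style: many small (8 KB) random I/Os -> high IOPS, modest bandwidth.
print(bandwidth_mb_per_s(iops=5000, io_size_kb=8))      # ~39 MB/s

# OLAP-style: fewer large (1 MB) sequential I/Os -> modest IOPS, high bandwidth.
print(bandwidth_mb_per_s(iops=200, io_size_kb=1024))    # 200 MB/s
```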

The biggest problem with a traditional mechanical disk is its read/write head.

Because data is reached by physically moving the head, the disk can serve I/O either sequentially or randomly, but random I/O pays the expensive cost of head seeks and rotational delay.

As a result, sequential I/O access is much faster than random I/O access.

Many database designs exist precisely to take full advantage of sequential I/O; for example, Oracle writes its redo log sequentially. A rough way to see the gap for yourself is sketched below.
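
The following is only a rough illustration, not a rigorous benchmark: it reads the same file once in on-disk order and once at shuffled offsets, then compares elapsed time. The file name test.dat and the 16 KB block size are assumptions; on a mechanical disk the shuffled pass is dominated by seek time and rotational delay, though the OS page cache can mask the difference (a real test would drop caches or use direct I/O).

```python
import os
import random
import time

# Rough illustration (not a rigorous benchmark): read an existing file
# once sequentially and once at shuffled offsets, then compare times.
# "test.dat" is a placeholder; create a file of a few hundred MB first.
PATH = "test.dat"
BLOCK = 16 * 1024                      # 16 KB "small" I/O

size = os.path.getsize(PATH)
offsets = [i * BLOCK for i in range(size // BLOCK)]

def timed_read(order):
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        for off in order:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

shuffled = offsets[:]
random.shuffle(shuffled)

print("sequential:", timed_read(offsets))   # reads blocks in on-disk order
print("random:    ", timed_read(shuffled))  # forces a seek between reads
```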


If the database server performs both sequential and random I/O, it is the random I/O that benefits most from the cache.
There are 3 reasons:
① Sequential I/O generally scans the data only once, so caching it is of little use.
② Sequential I/O is already faster than random I/O.
③ Random I/O usually needs only a few specific rows, but the I/O granularity is a whole page, so most of each page read is wasted; sequential I/O, in contrast, typically uses all the rows on every block it reads, making each I/O more cost-effective.
Therefore, caching random I/O saves far more work; a toy page-cache simulation below illustrates the point.
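
To make the point concrete, here is a toy LRU page-cache simulation (the page counts and cache size are arbitrary assumptions): a one-pass sequential scan never reuses a page, so it gets no hits, while random reads over a hot set that fits in the cache hit almost every time.

```python
import random
from collections import OrderedDict

# Toy LRU page cache showing why caching pays off for random I/O but not
# for a one-pass sequential scan. Page counts and cache size are
# arbitrary assumptions chosen for illustration.

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, page: int) -> None:
        if page in self.pages:
            self.pages.move_to_end(page)        # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            self.pages[page] = True
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)  # evict least recently used

TOTAL_READS = 10_000
CACHE_PAGES = 1_000

# One-pass sequential scan: every page is touched exactly once -> all misses.
seq = LRUCache(CACHE_PAGES)
for page in range(TOTAL_READS):
    seq.read(page)

# Random reads over a hot set that fits in the cache -> almost all hits.
rnd = LRUCache(CACHE_PAGES)
for _ in range(TOTAL_READS):
    rnd.read(random.randint(0, CACHE_PAGES - 1))

print("sequential hit rate:", seq.hits / TOTAL_READS)  # 0.0
print("random-hot hit rate:", rnd.hits / TOTAL_READS)  # roughly 0.9
```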

In a traditional database architecture there is little that can relieve random I/O, and random I/O is what almost every DBA dreads.
However, MySQL's InnoDB cleverly converts random I/O into sequential I/O by way of its transaction log (a rough sketch of the idea follows below).
And if you can afford it, adding memory is still the best remedy for random I/O.
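
The sketch below captures only the general write-ahead-log idea behind that conversion, not InnoDB's actual on-disk format: a commit appends change records sequentially to a log, and the random in-place page writes are deferred to a later checkpoint. The file names, record layout, and page size are assumptions made up for illustration.

```python
import os

# Minimal write-ahead-log sketch of turning random writes into sequential
# ones. This is NOT InnoDB's real format; file names, record layout and
# page size below are made-up assumptions for illustration only.

LOG_PATH = "redo.log"        # hypothetical sequential log file
DATA_PATH = "table.dat"      # hypothetical data file (must already exist)
PAGE_SIZE = 16 * 1024        # assumed page size

def log_update(page_no: int, payload: bytes) -> None:
    """Commit path: one sequential append plus an fsync."""
    header = page_no.to_bytes(8, "little") + len(payload).to_bytes(4, "little")
    with open(LOG_PATH, "ab") as log:
        log.write(header + payload)
        log.flush()
        os.fsync(log.fileno())          # durable after a sequential write

def checkpoint(dirty_pages: dict[int, bytes]) -> None:
    """Background path: apply buffered changes to their real locations."""
    with open(DATA_PATH, "r+b") as data:
        for page_no, payload in sorted(dirty_pages.items()):
            data.seek(page_no * PAGE_SIZE)   # random, but off the commit path
            data.write(payload.ljust(PAGE_SIZE, b"\x00"))
        data.flush()
        os.fsync(data.fileno())
```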
