Fio Usage Guide


This document is a translation of the fio-2.0.9 HOWTO. Fio has a very large number of parameters, and this translation does not test the function and usage of every one of them; only a small number of parameters have actually been tried. Most descriptions are literal translations or based on personal understanding, so some discrepancies are inevitable. It is published first and will be supplemented and corrected later as it gets used. Reading a worked fio example in another document may make things clearer.

Fio is an extremely powerful tool; its strengths are too many to list them all. A few highlights:

1) Supports more than 10 IO engines, and custom engines can be added
2) Comes with its own plotting tool, which calls gnuplot to draw graphs
3) Supports almost every parameter needed to describe a storage workload
4) Configurable CPU, memory, process/thread, file and IO characteristics
5) Compression, trace replay and more are all included and flexibly configurable

Brief Introduction

Fio was originally written to save the trouble of writing a special-purpose test program every time a specific workload had to be tested, whether for performance measurement or for finding and reproducing bugs. Writing such a one-off test application each time is a waste of time, so a tool was needed that could simulate a given IO workload without writing yet another special-purpose test case. A test workload is hard to define, however, because many processes or threads may be involved, each generating IO in its own way. Fio needs to be flexible enough to simulate all of these cases.

A typical fio workflow

1) Write a job file describing the IO workload to be generated. A job file can control the creation of any number of threads and files. A typical job file has a global section (defining shared parameters) and one or more job sections (describing the jobs to be created).

2) When fio runs, it reads these parameters from the file, processes them, and starts the threads/processes that perform the described IO.

Run Fio

How to run it:

$ fio job_file

Fio will run according to the contents of job_file. You can specify multiple job files on the command line, and fio will run them serially. This is equivalent to using the stonewall parameter between the different sections of a single job file.

If a job file contains only one job, its parameters can be given directly on the command line and no job file is needed at all. Command-line arguments have the same format as job file parameters. For example, the parameter iodepth=2 in a job file can be written as --iodepth 2 or --iodepth=2 on the command line.
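For example (the file names and option values here are made up for illustration), a single job can be run either from a job file or entirely from the command line, and several job files can be passed at once to run one after another:

$ fio seq-read.fio
$ fio --name=seq-read --rw=read --bs=4k --size=32m
$ fio seq-read.fio rand-write.fio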

Fio does not need to run as root, unless the files or devices being used require root permission. Some options may be restricted without root, such as memory locking, switching the IO scheduler, or decreasing the nice value.

Job File Format

The job file uses the classic INI format. The value in [] is the job name and may contain any ASCII characters except 'global', which has a special meaning. The global section describes the default configuration values for each job in the job file. A job section can override parameters from the global section, and a job file can contain several global sections; a job is only affected by the global sections above it. ';' and '#' can be used for comments.

An example job file that defines two processes, each randomly reading from a 128MB file:

; -- start job file --
[global]
rw=randread
size=128m

[job1]

[job2]
; -- end job file --

The job1 and job2 sections are empty, because all of the descriptive parameters are shared. Since no filename= option is given, fio creates a filename for each job. Written as a command line, it would be:

$ fio --name=global --rw=randread --size=128m --name=job1 --name=job2

An example of multiple processes randomly writing to files:

; -- start job file --
[random-writers]
ioengine=libaio
iodepth=4
rw=randwrite
bs=32k
direct=0
size=64m
numjobs=4
; -- end job file --

There is no global section, only one job section.

Description of the example above: it uses asynchronous IO (libaio) with a queue depth of 4 for each file, random writes with 32k blocks and non-direct (buffered) IO, and 4 processes in total, each randomly writing a 64MB file. The same thing can be done with the following command:

$ fio --name=random-writers --ioengine=libaio --iodepth=4 --rw=randwrite --bs=32k --direct=0 --size=64m --numjobs=4

Environment Variables

Environment variable expansion is supported in job files. Anything of the form ${VARNAME} can be used as an option value (on the right-hand side of the = sign).

Example:

$ SIZE=64m NUMJOBS=4 fio jobfile.fio

; -- start job file --
[random-writers]
rw=randwrite
size=${SIZE}
numjobs=${NUMJOBS}
; -- end job file --

will be expanded to

; -- start job file --
[random-writers]
rw=randwrite
size=64m
numjobs=4
; -- end job file --

Reserved keywords

Fio has some reserved keywords that are internally replaced with appropriate values. These keywords are:

$pagesize The page size of the current system
$mb_memory The total memory of the system, in megabytes
$ncpus The number of online available CPUs

They can be used both on the command line and in the job file, and are automatically replaced with the value on the current system when the job is run. Simple arithmetic is supported, for example:

size=8*$mb_memory
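As a minimal sketch (the workload itself is made up for illustration), these keywords let a job file scale with the machine it runs on:

; -- start job file --
[scaled-job]
rw=randread
bs=$pagesize
size=8*$mb_memory
numjobs=$ncpus
; -- end job file --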

Parameter types

str String
time Integer with an optional time suffix
int Integer
bool Boolean value
irange Integer range
float_list List of floating point numbers
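As an illustration (the job and values below are made up), here are options of several of these types in a job file:

; -- start job file --
[type-examples]
; str
rw=randread
; time (seconds)
runtime=60
; int, with an SI suffix
size=1g
; bool
time_based=1
; irange
bsrange=4k-16k
; -- end job file --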

The basic parameters that a job contains

1) IO Type

The type of IO issued to the files.

<1>readwrite=str,rw=str

read Sequential reads
write Sequential writes
randwrite Random writes
randread Random reads
rw,readwrite Sequential mixed reads and writes
randrw Random mixed reads and writes

[Parameter notes]

For mixed IO types, the default split is 50% reads and 50% writes. For a given IO type the result may be slightly skewed, because the read and write speeds may differ.

By appending ":<nr>" to the str, you can configure how many IOs to perform before generating a new offset. For random reads, for example, 'rw=randread:8' passes an offset modifier with a value of 8. If the suffix is used with a sequential IO type, then after each <nr> IOs that value is added to the generated offset. E.g. rw=write:4k will skip 4k after every write; it turns sequential IO into sequential IO with holes. See also the 'rw_sequencer' option.
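A minimal sketch of both forms of the suffix (job names and sizes are arbitrary):

; -- start job file --
; generate a new random offset only after every 8 reads
[batched-random]
rw=randread:8
bs=4k
size=64m

; skip 4k after every sequential write (sequential IO with holes)
[holed-sequential]
rw=write:4k
bs=4k
size=64m
; -- end job file --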

<2>rw_sequencer=str

If an offset modifier is appended to rw=<str>, this option controls how that number <nr> modifies the generated IO offsets. Accepted values are:

sequential Generate sequential offsets
identical Generate the same offset

[Parameter notes]

'sequential' is only meaningful for random IO. Normally fio generates a new random offset for every IO. With e.g. rw=randread:8, a seek is performed only after every 8 IOs instead of after every IO. Sequential IO is already sequential, so setting 'sequential' for it makes no difference. 'identical' behaves much like 'sequential', except that it generates the same offset 8 consecutive times before generating a new one.
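A small sketch of the 'identical' behaviour described above (the values are illustrative):

; -- start job file --
[identical-offsets]
rw=randread:8
rw_sequencer=identical
bs=4k
size=64m
; -- end job file --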

2) Block Size

The size of the IO units generated. It can be a single value or a range.

<1>blocksize=int,bs=int

The block size used for each IO unit. The default is 4k. A single value applies to both reads and writes. If a value is given after a comma, it applies only to writes; in other words the format is either bs=read_and_write or bs=read,write. E.g. bs=4k,8k uses 4k blocks for reads and 8k blocks for writes, and bs=,8k makes writes use 8k blocks while reads use the default.
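A brief sketch of the comma form (job name and sizes are arbitrary), mixing 4k reads with 8k writes:

; -- start job file --
[split-bs]
rw=randrw
bs=4k,8k
size=64m
; -- end job file --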

3) IO Size

How much data will be read/written

<1>size=int

The total amount of data this job will transfer. Fio runs until all of this data has been transferred, unless a runtime limit is set (the 'runtime' option). Unless the 'nrfiles' and 'filesize' options are given, fio divides this size among the files defined by the job. If this value is not set, fio uses the full size of the given files or devices. If the files do not exist, the size option must be given. A percentage between 1 and 100 may also be given; e.g. size=20% makes fio use 20% of the full size of the given files or devices.
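A minimal sketch of the percentage form, assuming a hypothetical device /dev/sdX:

; -- start job file --
[twenty-percent]
filename=/dev/sdX
rw=randread
size=20%
; -- end job file --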

4) IO Engine

The way to initiate IO.

<1>ioengine=str

Defines how the job issues IO to its files. The available engines are:

sync Basic read, write. lseek is used for positioning.

psync Basic pread, pwrite.

vsync Basic readv, writev.

libaio Linux native asynchronous IO. Note that Linux may only support queued behaviour with non-buffered IO (set direct=1 or buffered=0).

posixaio glibc POSIX asynchronous IO.

solarisaio Solaris-specific asynchronous IO.

windowsaio Windows-specific asynchronous IO.

mmap The file is memory-mapped into user space and the data is copied with memcpy.

splice splice and vmsplice are used to transfer data between user space and the kernel.

syslet-rw Uses syslet system calls to make ordinary read/write asynchronous.

sg SCSI generic sg v3 IO. This can be done either synchronously with the SG_IO ioctl, or, if the target is an sg character device, asynchronously with read and write.

null Does not transfer any data, it only pretends to. Mainly used to exercise fio itself and for debugging/testing purposes.

net Transfers data over the network to a given host:port. Depending on the protocol used, the hostname, port, listen and filename options are used to determine which connection to establish, and the protocol option determines which protocol is used.

netsplice Like net, but uses splice/vmsplice to map and send/receive the data.

cpuio Does not transfer any data, but burns CPU cycles according to the cpuload= and cpucycle= options. E.g. cpuload=85 makes the job do no actual IO, but consume 85% of a CPU. On SMP machines, use numjobs=<no_of_cpu> to get the desired overall CPU usage, since cpuload only loads a single CPU at the given percentage. (A sketch follows after this list.)

guasi The GUASI IO engine; GUASI is a generic user-space asynchronous system call interface for asynchronous IO.

rdma The RDMA IO engine supports both RDMA memory semantics (RDMA_WRITE/RDMA_READ) and channel semantics (send/recv) for the InfiniBand, RoCE and iWARP protocols.

external Loads an external IO engine (a binary object). E.g. ioengine=external:/tmp/foo.o will load the IO engine foo.o from /tmp.
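As mentioned in the cpuio entry above, here is a small sketch of a pure CPU-burning job (the load level and runtime are arbitrary); with numjobs=4 it occupies four CPUs:

; -- start job file --
[burn-cpu]
ioengine=cpuio
cpuload=85
numjobs=4
time_based
runtime=60
; -- end job file --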

5) IO Depth

If the IO engine is asynchronous, this specifies the depth of the queue that fio will maintain.

<1>iodepth=int
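A minimal sketch of a deeper queue with an asynchronous engine (values picked for illustration; direct=1 because libaio only queues non-buffered IO):

; -- start job file --
[deep-queue]
ioengine=libaio
direct=1
iodepth=16
rw=randread
bs=4k
size=128m
; -- end job file --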
