Fio

fio - Flexible IO Tester (FIO) benchmarking and workload simulation tool - http://git.kernel.dk/?p=fio.git

-

git.kernel.dk Git - http://git.kernel.dk/?p=fio.git
 * description: fio - Flexible IO Tester
 * owner: Jens Axboe

-

Jens Axboe - Wikipedia - http://en.wikipedia.org/wiki/Jens_Axboe
 * Jens Axboe is a Linux kernel hacker. He is the current Linux kernel maintainer of the block layer and other block devices, along with contributing the CFQ I/O scheduler, Noop scheduler, Deadline scheduler and splice (system call) IO architecture. Jens is also the author of the blktrace utility and kernel parts, which provides a way to trace every block IO activity in the Linux kernel. blktrace exists in 2.6.17 and later Linux kernels.
 * To facilitate his block layer work in the Linux kernel, Jens created the Flexible IO tester (FIO) benchmarking and workload simulation tool. FIO is able to simulate various types of I/O loads, such as synchronous, asynchronous, mmap, etc., as well as specifying the number of threads or processes, read vs. write mix, and various other parameters. FIO was used to set the world record in April 2009 for the highest number of I/Os-per-second (IOPS) in a single system.

Disambiguation
See also: Fusion-io (FIO)

Source
git clone git://git.kernel.dk/fio.git

or

git clone https://github.com/axboe/fio.git

Usage
fio --name=write_job --filename=/dev/nvme0n1 --iodepth=128 --bs=256k --rw=write --direct=1 --runtime=12h --time_based --ioengine=libaio
 * 1) 12 hour write test

Installation
fio - Flexible IO Tester - http://git.kernel.dk/?p=fio.git

Source method
Source build and install: sudo yum -y install gcc make wget git libaio-devel zlib-devel
 * 1) RHEL DEPENDENCIES
 * 1) may require # yum -y --enablerepo=rpmforge install git

sudo apt-get install gcc make wget git libaio-dev zlib1g-dev
 * 1) UBUNTU DEPENDENCIES


 * 1) MAKE SURE LIBAIO IS INCLUDED, OR PERFORMANCE WILL BE TERRIBLE!

mkdir -p ~/.src ; cd ~/.src
git clone git://git.kernel.dk/fio.git
cd fio
git checkout fio-3.16
./configure
make clean
make
 * 1) BUILD
 * 1) git checkout fio-2.0.9
 * 2) git checkout fio-2.16
 * 3) git checkout fio-3.0

cp fio /usr/local/bin
sudo make install
 * 1) INSTALL
 * 1) OR 'sudo make install', which also installs the man page and fio_generate_plots

RPMForge method
RPMForge fio install: yum -y --enablerepo=rpmforge install fio

Freshmeat method
fio utility - http://freshmeat.net/projects/fio
 * Compiling FIO Benchmark Tool (Linux) - SmarterTrack 4.9 - http://kb.fusionio.com/KB/a26/compiling-fio-benchmark-tool-linux.aspx

yum -y install gcc libaio-devel make wget
mkdir -p ~/src ; cd ~/src
wget "http://freshmeat.net/urls/3aa21b8c106cab742bf1f20d60629e3f"
tar -zvxf fio-1.57.tar.gz
cd fio-1.57
make

cp fio /usr/local/bin # OR make install

Windows Installation
Windows fio - flexible io tester - http://bluestop.org/fio/


 * 64bit installer - http://bluestop.org/files/fio/releases/fio-2.6-x64.msi
 * 64bit zip - http://bluestop.org/files/fio/releases/fio-2.6-x64.zip
 * 32bit installer - http://bluestop.org/files/fio/releases/fio-2.6-x86.msi
 * 32bit zip - http://bluestop.org/files/fio/releases/fio-2.6-x86.zip

"This site contains Windows binaries for fio. The main fio site, which contains the latest code, is at http://git.kernel.dk/?p=fio.git."


 * FIO sources which were used to build the current installer can be found in fio.zip.
 * The Windows version of FIO uses Cygwin.
 * Sources for the Cygwin binaries which are used in the current installer can be found in cygwin_src.zip.
 * http://bluestop.org/fio/cygwin_src.zip

Fio is installed to: C:\Program Files\fio\fio.exe

Ported to Windows by Bruce Cran.

Execution: "C:\Program Files\fio\fio.exe" --name=job1 --filename=\\.\PhysicalDrive1 --size=100m --rw=write --thread

Windows threading sucks!
fio: this platform does not support process shared mutexes, forcing use of threads. Use the 'thread' option to get rid of this warning.

Errors you will see when the drive fills:
fio: io_u error on file \\.\PhysicalDrive1: Input/output error: read offset=1073737728, buflen=4096
fio: pid=7980, err=5/file:io_u.c:1603, func=io_u error, error=Input/output error

job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1603, func=io_u error, error=Input/output error): pid=7980: Wed Apr 6 20:

VMware ESXi
Support for ESXi is minimal.

commit c89318761586dbd77b8f31ce0ed68ff4fc086c89
Author: Jens Axboe
Date:   Wed Jun 18 15:30:09 2014 -0700

Add support for compiling for ESX

With contributions from Ryan Haynes 

Signed-off-by: Jens Axboe 

From within an ESXi DDK build environment: (a separate build is needed for 5.x and 6.x)
mkdir -p ~/.src ; cd ~/.src
git clone git://git.kernel.dk/fio.git
cd fio
git checkout c89318761586dbd77b8f31ce0ed68ff4fc086c89
 * 1) sinter esxi5

make clean
./configure --esx
make
 * 1) build

ldd fio
 * 1) verify linked

scp fio root@vmware-server:/scratch/
ssh root@vmware-server
 * 1) copy to test server

cd /scratch
./fio --help
 * 1) test

cd /scratch
strace ./fio
 * 1) debug

./fio --name=job1 --filename=/vmfs/volumes/some_datastore/test.txt --size=1m --rw=write
 * 1) quick write test

./fio --name=job1 --filename=/vmfs/volumes/some_datastore/test.txt --size=1m
 * 1) quick read test

GitHub Issue:
 * libaio library not found on ESXi · Issue #80 · axboe/fio · GitHub - https://github.com/axboe/fio/issues/80
 * "Sounds like the binary should be statically linked against lbaio instead of dynamically in the case of esxi."
 * Disable libaio for ESXi build - bug#80 by kennethburgener · Pull Request #81 · axboe/fio - https://github.com/axboe/fio/pull/81
 * Disable libaio for ESXi build - bug#80 by ezrapedersen · Pull Request #138 · axboe/fio - https://github.com/axboe/fio/pull/138

Note: An ESXi 6.0/5.5 build will not work on ESXi 5.0/5.1, but the reverse (a 5.0 build on 6.0) does appear to work.
./fio: /lib64/libc.so.6: version `GLIBC_2.7' not found (required by ./fio)
./fio: /lib64/libc.so.6: version `GLIBC_2.10' not found (required by ./fio)
./fio: /lib64/libc.so.6: version `GLIBC_2.6' not found (required by ./fio)

Fusion-io Compiling the fio Utility
The fio benchmarking utility is used to verify Linux system performance with an ioDrive. To compile the fio utility, perform the following steps:

Grab the latest version of fio source from the tarball link in http://freshmeat.net/projects/fio

Install the necessary standard dependencies. For example:

$ yum -y install gcc

Install the necessary libaio development headers. For example:

$ yum -y install libaio-devel

Explode the fio tarball:

$ tar xjvf fio-X.Y.Z.tar.bz2
$ cd fio-X.Y.Z

Build fio for your system:

$ make

When the build completes successfully, a fio binary is placed in the fio directory.

Job Options
Job Options:
name=str        # job name (if 'global', apply to all jobs)
filename=str    # target file or drive (/dev/sdb)
readwrite=str   # (rw=str) traffic type: read, write, randread, randwrite, rw/readwrite, randrw
size=int        # size for job (1k, 1m, 1g, 20%, etc)
blocksize=int   # (bs=int) block size: 4k (default), 1k, 1m, 1g, etc
ioengine=str    # io engine: sync (default), libaio (Linux native asynchronous io), etc
iodepth=int     # I/O queue size
direct=bool     # use non-buffered I/O (default: 0 - false)
thinktime=int   # stall job for given microseconds between issuing I/Os
nice=int        # run job at this nice value
cpuload=int     # limit percentage of CPU cycles used
runtime=int     # number of seconds to run
loops=int       # iterations for job (default: 1)
numjobs=int     # number of clones of job (default: 1)
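As a sketch, several of these options can be combined in a job file and expressed as an equivalent command line (the paths, sizes, and values below are hypothetical examples, not recommendations):

```shell
# Write a small job file exercising several of the options above.
# /tmp/fio-demo.job and all values here are hypothetical examples.
cat > /tmp/fio-demo.job <<'EOF'
[demo-job]
filename=/tmp/fio-demo.dat
readwrite=randread
size=64m
blocksize=4k
ioengine=sync
iodepth=1
direct=0
runtime=10
numjobs=2
EOF

# Each key=value line maps to a --key=value command line flag:
echo "fio --name=demo-job --filename=/tmp/fio-demo.dat --rw=randread --size=64m --bs=4k --ioengine=sync --iodepth=1 --direct=0 --runtime=10 --numjobs=2"
```

Run either form with `fio /tmp/fio-demo.job` or the echoed command; both describe the same job.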

Basic Traffic Tests
Simple read job:
fio --name=job1 --filename=/dev/sdb
fio --name=job1 --filename=/dev/sdb --size=10g

Simple write job: fio --name=job1 --filename=/dev/sdb --size=10g --rw=write

Simple read job: fio --name=job1 --filename=/dev/sdb --size=10g --rw=read

Simple two process write job: fio --name=global --filename=/dev/sdb --size=10g --rw=write --name=job1 --name=job2

Complex Write random traffic to /dev/sdb: fio --name=random-writers --ioengine=libaio --iodepth=1 --rw=randwrite --bs=32k --direct=0 --numjobs=1 --loops=2 --filename=/dev/sdb

High bandwidth job: fio --name=job1 --ioengine=libaio --iodepth=4 --rw=write --bs=1m --direct=1 --numjobs=1 --size=30g --filename=/dev/sdb

Unaligned DMA: (FreeBSD) fio --name=job1 --filename=/dev/ad1 --size=5g --iomem_align=1 --rw=write --bs=256k --thread

Random read job and write job: fio --name=global --ioengine=libaio --iodepth=4 --bs=4k --direct=0 --loops=2 --filename=/dev/sdb --name=randread --rw=randread --name=randwrite --rw=randwrite

Basic Report
Read Report:
job1: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.0.11-46-g5a90
Starting 1 process
Jobs: 1 (f=1): [R] [100.0% done] [623.4M/0K/0K /s] [160K/0 /0 iops] [eta 00m:00s]
job1: (groupid=0, jobs=1): err= 0: pid=1895: Sat Dec 8 23:20:13 2012
  read : io=10240MB, bw=637471KB/s, iops=159367, runt= 16449msec
    clat (usec): min=0, max=2083 , avg= 5.78, stdev=47.34
     lat (usec): min=0, max=2083 , avg= 5.86, stdev=47.35
    clat percentiles (usec):
     |  1.00th=[    0],  5.00th=[    0], 10.00th=[    1], 20.00th=[    1],
     | 30.00th=[    1], 40.00th=[    1], 50.00th=[    1], 60.00th=[    1],
     | 70.00th=[    1], 80.00th=[    1], 90.00th=[    1], 95.00th=[    1],
     | 99.00th=[  106], 99.50th=[  532], 99.90th=[  564], 99.95th=[  604],
     | 99.99th=[  812]
    bw (KB/s)  : min=597504, max=655360, per=100.00%, avg=637531.25, stdev=10370.41
    lat (usec) : 2=96.31%, 4=2.01%, 10=0.01%, 20=0.09%, 50=0.02%
    lat (usec) : 100=0.52%, 250=0.32%, 500=0.15%, 750=0.55%, 1000=0.04%
    lat (msec) : 2=0.01%, 4=0.01%
  cpu          : usr=8.70%, sys=40.61%, ctx=22011, majf=0, minf=27
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=2621440/w=0/d=0, short=r=0/w=0/d=0
 * 1) fio --filename=/dev/sdb --name=job1 --size=10g

Run status group 0 (all jobs):
   READ: io=10240MB, aggrb=637470KB/s, minb=637470KB/s, maxb=637470KB/s, mint=16449msec, maxt=16449msec

Disk stats (read/write):
  sdb: ios=42796/0, merge=2575677/0, ticks=26606/0, in_queue=26577, util=81.95%
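The headline numbers in a report are easy to sanity-check against each other: bandwidth is total I/O divided by runtime, and IOPS is issued I/Os divided by runtime. Checking the read report above (io=10240MB, runt=16449msec, 2621440 reads issued):

```shell
# Sanity-check the read report: bw and iops follow from io size and runtime.
# Note fio rounds: the report shows bw=637471KB/s and aggrb=637470KB/s.
awk 'BEGIN {
  io_kb   = 10240 * 1024     # io=10240MB expressed in KB
  runt_ms = 16449            # runt=16449msec
  ios     = 2621440          # issued: total=r=2621440
  printf "bw   = %d KB/s\n", io_kb * 1000 / runt_ms
  printf "iops = %d\n",      ios   * 1000 / runt_ms
}'
# → bw   = 637470 KB/s
# → iops = 159367
```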

Write Report:
job1: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.0.11-46-g5a90
Starting 1 process
Jobs: 1 (f=1): [W] [100.0% done] [0K/163.5M/0K /s] [0 /41.9K/0 iops] [eta 00m:00s]
job1: (groupid=0, jobs=1): err= 0: pid=1902: Sat Dec 8 23:22:21 2012
  write: io=10240MB, bw=304456KB/s, iops=76113, runt= 34441msec
    clat (usec): min=1, max=926829 , avg=12.00, stdev=815.23
     lat (usec): min=2, max=926829 , avg=12.19, stdev=815.23
    clat percentiles (usec):
     |  1.00th=[    2],  5.00th=[    2], 10.00th=[    2], 20.00th=[    2],
     | 30.00th=[    2], 40.00th=[    3], 50.00th=[    3], 60.00th=[    4],
     | 70.00th=[    4], 80.00th=[    4], 90.00th=[    6], 95.00th=[   10],
     | 99.00th=[   18], 99.50th=[   32], 99.90th=[ 1736], 99.95th=[ 6048],
     | 99.99th=[14656]
    bw (KB/s)  : min=  132, max=741784, per=100.00%, avg=312652.71, stdev=74487.53
    lat (usec) : 2=0.01%, 4=59.53%, 10=35.16%, 20=4.46%, 50=0.54%
    lat (usec) : 100=0.04%, 250=0.11%, 500=0.05%, 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.04%, 10=0.03%, 20=0.02%, 50=0.01%
    lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  cpu          : usr=11.07%, sys=30.73%, ctx=7795, majf=0, minf=27
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=2621440/d=0, short=r=0/w=0/d=0
 * 1) fio --filename=/dev/sdb --name=job1 --size=10g --rw=write

Run status group 0 (all jobs):
  WRITE: io=10240MB, aggrb=304455KB/s, minb=304455KB/s, maxb=304455KB/s, mint=34441msec, maxt=34441msec

Disk stats (read/write):
  sdb: ios=83/19960, merge=0/2541727, ticks=5/4637755, in_queue=4655340, util=99.35%

Benchmark
simple job file: defines two processes, each randomly reading from a 128MB file:

[global]
rw=randread
size=128m

[job1]

[job2]

command line version: fio --name=global --rw=randread --size=128m --name=job1 --name=job2

---

4 processes each randomly writing to their own files:
 * We want to use async io here
 * depth of 4 for each file
 * increased the buffer size used to 32KB
 * numjobs to 4 to fork 4 identical jobs
 * 64MB file

[random-writers]
ioengine=libaio
iodepth=4
rw=randwrite
bs=32k
direct=0
size=64m
numjobs=4

command line version: fio --name=random-writers --ioengine=libaio --iodepth=4 --rw=randwrite --bs=32k --direct=0 --size=64m --numjobs=4

---

Fusion-io: fio random 512 read peak IOPS: (Run for 150 seconds (120 + 30) or 2.5 minutes)
[global]
readwrite=randrw
rwmixread=100
blocksize=512
ioengine=libaio
numjobs=4
thread=1
direct=1
iodepth=32
iodepth_batch=16
iodepth_batch_complete=16
group_reporting=1
ramp_time=30
norandommap=1
description=fio random 512 read peak IOPS
time_based=1
runtime=120

[/dev/sdb]
filename=/dev/sdb

Command: fio --minimal --output=fio.output fio-job.ini

Inspecting disk IO performance with fio
Inspecting disk IO performance with fio | Linux.com - https://www.linux.com/learn/tutorials/442451-inspecting-disk-io-performance-with-fio
 * Linux.com :: Inspecting disk IO performance with fio (dead link)
 * Linux Disk Benchmarking – IO Performance With fio Tool - http://www.cyberciti.biz/tips/linux-disk-benchmarking-io.html

Linux.com has published an article about a new tool called fio:
 * fio was created to allow benchmarking specific disk IO workloads. It can issue its IO requests using one of many synchronous and asynchronous IO APIs, and can also use various APIs which allow many IO requests to be issued with a single API call. You can also tune how large the files fio uses are, at what offsets in those files IO is to happen at, how much delay if any there is between issuing IO requests, and what if any filesystem sync calls are issued between each IO request. A sync call tells the operating system to make sure that any information that is cached in memory has been saved to disk and can thus introduce a significant delay. The options to fio allow you to issue very precisely defined IO patterns and see how long it takes your disk subsystem to complete these tasks.

random-read-test.fio
"The first test you might like to perform is for random read IO performance. This is one of the nastiest IO loads that can be issued to a disk, because it causes the disk head to seek a lot, and disk head seeks are extremely slow operations relative to other hard disk operations. One area where random disk seeks can be issued in real applications is during application startup, when files are requested from all over the hard disk. You specify fio benchmarks using configuration files with an ini file format. You need only a few parameters to get started. rw=randread tells fio to use a random reading access pattern, size=128m specifies that it should transfer a total of 128 megabytes of data before calling the test complete, and the directory parameter explicitly tells fio what filesystem to use for the IO benchmark. On my test machine, the /tmp filesystem is an ext3 filesystem stored on a RAID-5 array consisting of three 500GB Samsung SATA disks. If you don't specify directory, fio uses the current directory that the shell is in, which might not be what you want. "

random-read-test.fio:
 * random read of 128mb of data

[random-read]
rw=randread
size=128m
directory=/tmp/fio-testing/data

Test execution:
$ fio random-read-test.fio
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
Starting 1 process
random-read: Laying out IO file(s) (1 file(s) / 128MiB)
Jobs: 1 (f=1): [r] [100.0% done] [ 3588/     0 kb/s] [eta 00m:00s]
random-read: (groupid=0, jobs=1): err= 0: pid=30598
  read : io=128MiB, bw=864KiB/s, iops=211, runt=155282msec
    clat (usec): min=139, max=148K, avg=4736.28, stdev=6001.02
    bw (KiB/s) : min=  227, max= 5275, per=100.12%, avg=865.00, stdev=362.99
  cpu          : usr=0.07%, sys=1.27%, ctx=32783, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     issued r/w: total=32768/0, short=0/0
     lat (usec): 250=34.92%, 500=0.36%, 750=0.02%, 1000=0.05%
     lat (msec): 2=0.41%, 4=12.80%, 10=44.96%, 20=5.16%, 50=0.94%
     lat (msec): 100=0.37%, 250=0.01%

Run status group 0 (all jobs):
   READ: io=128MiB, aggrb=864KiB/s, minb=864KiB/s, maxb=864KiB/s, mint=155282msec, maxt=155282msec

Disk stats (read/write):
  dm-6: ios=32768/148, merge=0/0, ticks=154728/12490, in_queue=167218, util=99.59%

random-read-test-aio.fio
random-read-test-aio.fio:
 * random read of 128mb of data with libaio

[random-read]
rw=randread
size=128m
directory=/tmp/fio-testing/data
ioengine=libaio
iodepth=8
direct=1
invalidate=1

Execution:
$ fio random-read-test-aio.fio
random-read: (groupid=0, jobs=1): err= 0: pid=31318
  read : io=128MiB, bw=2,352KiB/s, iops=574, runt= 57061msec
    slat (usec): min=8, max=260, avg=25.90, stdev=23.23
    clat (usec): min=1, max=124K, avg=13901.91, stdev=12193.87
    bw (KiB/s) : min=    0, max= 5603, per=97.59%, avg=2295.43, stdev=590.60
...
  IO depths    : 1=0.1%, 2=0.1%, 4=4.0%, 8=96.0%, 16=0.0%, 32=0.0%, >=64=0.0%
...
Run status group 0 (all jobs):
   READ: io=128MiB, aggrb=2,352KiB/s, minb=2,352KiB/s, maxb=2,352KiB/s, mint=57061msec, maxt=57061msec

high load test
fio --rw=randrw --bs=64k --numjobs=64 --iodepth=64 --direct=1 --sync=0 --ioengine=libaio --name=test \
  --loops=10000 --size=9654901800 --runtime=600 --rwmixwrite=100 --do_verify=0 --filename=/dev/fioa --thread

fio --rw=randrw --bs=64k --numjobs=64 --iodepth=64 --direct=1 --sync=0 --ioengine=libaio --name=test \
  --loops=10000 --size=9654901800 --runtime=600 --rwmixwrite=100 --do_verify=0 --filename=/dev/fiob --thread

Source: Fusion thermal monitoring test

Mix: (adjust rwmixwrite)
fio --rw=randrw --bs=64k --numjobs=64 --iodepth=64 --direct=1 --sync=0 --ioengine=libaio --name=test \
  --loops=10000 --size=9654901800 --runtime=600 --rwmixwrite=60 --do_verify=0 --filename=/dev/fioa --thread

New ETA
Do not report live ETA: fio --eta=never ...

Trim
fio can issue trim commands: fio --name=test --random_generator=lfsr --ioengine=sg --rw=randtrim --filename=/dev/sg4 --iodepth=8 --iodepth_batch=8 --loops=3

Minimal Output
Fio minimal output field indexes - https://www.andypeace.com/fio_minimal.html

"fio, the flexible IO tester, is a very useful tool for benchmarking IO performance. It has an option to produce minimal output, which is very useful when gathering data for later processing, e.g. graphing. The man page for fio describes the output format, but does not number the fields. This means that when extracting fields, one must count the fields in the man page to find the correct index to extract."

Field	Description
1	terse version
2	fio version
3	jobname
4	groupid
5	error

Read status:
6	Total I/O (KB)
7	bandwidth (KB/s)
8	IOPS
9	runtime (ms)
Submission latency:
10	min
11	max
12	mean
13	standard deviation
Completion latency:
14	min
15	max
16	mean
17	standard deviation
Completion latency percentiles (20 fields):
18-37	Xth percentile=usec
Total latency:
38	min
39	max
40	mean
41	standard deviation
Bandwidth:
42	min
43	max
44	aggregate percentage of total
45	mean
46	standard deviation

Write status:
47	Total I/O (KB)
48	bandwidth (KB/s)
49	IOPS
50	runtime (ms)
Submission latency:
51	min
52	max
53	mean
54	standard deviation
Completion latency:
55	min
56	max
57	mean
58	standard deviation
Completion latency percentiles (20 fields):
59-78	Xth percentile=usec
Total latency:
79	min
80	max
81	mean
82	standard deviation
Bandwidth:
83	min
84	max
85	aggregate percentage of total
86	mean
87	standard deviation

CPU usage:
88	user
89	system
90	context switches
91	major page faults
92	minor page faults

IO depth distribution:
93	<=1
94	2
95	4
96	8
97	16
98	32
99	>=64

IO latency distribution (microseconds):
100	<=2
101	4
102	10
103	20
104	50
105	100
106	250
107	500
108	750
109	1000

IO latency distribution (milliseconds):
110	<=2
111	4
112	10
113	20
114	50
115	100
116	250
117	500
118	750
119	1000
120	2000
121	>=2000

Disk utilization (10 fields for each disk used; for disk n, n is zero-based):
122+10n	name
123+10n	read ios
124+10n	write ios
125+10n	read merges
126+10n	write merges
127+10n	read ticks
128+10n	write ticks
129+10n	read in-queue time
130+10n	write in-queue time
131+10n	disk utilization percentage

Error Info (dependent on continue_on_error, default off):
F-1	total # errors
F-0	first error code
newline	text description (if provided in config - appears on newline)
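Given those indexes, individual metrics can be cut out of a terse record with standard tools. A sketch using a synthetic record (only the first nine fields hold realistic values; the rest are zero-padded placeholders):

```shell
# Extract metrics from a terse (--minimal) record using the field indexes
# above. $terse is synthetic: fields 1-9 are realistic, the rest are 0-padding.
terse='3;fio-2.0.11;job1;0;0;10485760;637471;159367;16449'
i=10
while [ $i -le 121 ]; do terse="$terse;0"; i=$((i+1)); done

read_kb=$(echo "$terse" | cut -d';' -f6)    # field 6: read Total I/O (KB)
read_bw=$(echo "$terse" | cut -d';' -f7)    # field 7: read bandwidth (KB/s)
read_iops=$(echo "$terse" | cut -d';' -f8)  # field 8: read IOPS

echo "read: ${read_kb} KB total, ${read_bw} KB/s, ${read_iops} IOPS"
# → read: 10485760 KB total, 637471 KB/s, 159367 IOPS
```

The same pattern works on real `--minimal` output, one record per line.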

Samples
Running a read zero job: (Kep K.)
[global]
readwrite=randrw
rwmixread=100
blocksize=512
ioengine=libaio
numjobs=1
thread=0
direct=1
iodepth=32
iodepth_batch=16
iodepth_batch_complete=16
group_reporting=1
norandommap=1
description=fio random 512 read zero IOPS
time_based=1
runtime=30
randrepeat=0
 * 1) ramp_time=5

[/dev/fioa]
filename=/dev/fioa
cpus_allowed=1
startdelay=1

[/dev/fioa]
filename=/dev/fioa
cpus_allowed=2
startdelay=2

[/dev/fioa]
filename=/dev/fioa
cpus_allowed=3
startdelay=3

[/dev/fioa]
filename=/dev/fioa
cpus_allowed=4
startdelay=4

Linked Clones Example
by Kenneth Burgener

[random-writers]
ioengine=libaio
iodepth=1
rw=randwrite
bs=32k
direct=0
numjobs=1
loops=99999
filename=/dev/sdb1
 * nice=19
 * thinktime=10000
 * size=1g
 * cpuload=10

Command line version: fio --name=random-writers --ioengine=libaio --iodepth=1 --rw=randwrite --bs=32k --direct=0 --numjobs=1 --loops=99999 --filename=/dev/sdb

Bandwidth Test
[global]
readwrite=randrw
rwmixread=0
blocksize=1M
ioengine=libaio
numjobs=4
thread=0
direct=1
iodepth=32
iodepth_batch=16
iodepth_batch_complete=16
group_reporting=1
ramp_time=5
norandommap=1
description=fio random 1M write peak BW
time_based=1
runtime=60
randrepeat=0

[/dev/sdb]
filename=/dev/sdb

Only changing the blocksize in the following fio config:
 * 8K   | WRITE: io=24378MB, aggrb=416040KB/s, minb=426025KB/s, maxb=426025KB/s, mint=60002msec, maxt=60002msec
 * 16K  | WRITE: io=37994MB, aggrb=648328KB/s, minb=663888KB/s, maxb=663888KB/s, mint=60009msec, maxt=60009msec
 * 32K  | WRITE: io=50756MB, aggrb=866155KB/s, minb=886943KB/s, maxb=886943KB/s, mint=60006msec, maxt=60006msec
 * 64K  | WRITE: io=60554MB, aggrb=1009.5MB/s, minb=1033.3MB/s, maxb=1033.3MB/s, mint=60011msec, maxt=60011msec
 * 128K | WRITE: io=18419MB, aggrb=314246KB/s, minb=321788KB/s, maxb=321788KB/s, mint=60020msec, maxt=60020msec
 * 256K | WRITE: io=21555MB, aggrb=366735KB/s, minb=375536KB/s, maxb=375536KB/s, mint=60186msec, maxt=60186msec
 * 512K | WRITE: io=29526MB, aggrb=502411KB/s, minb=514469KB/s, maxb=514469KB/s, mint=60179msec, maxt=60179msec
 * 1M   | WRITE: io=38220MB, aggrb=649344KB/s, minb=664928KB/s, maxb=664928KB/s, mint=60272msec, maxt=60272msec
 * 2M   | WRITE: io=34552MB, aggrb=585724KB/s, minb=599781KB/s, maxb=599781KB/s, mint=60406msec, maxt=60406msec
 * 4M   | WRITE: io=34992MB, aggrb=588999KB/s, minb=603135KB/s, maxb=603135KB/s, mint=60835msec, maxt=60835msec

Another run involves reducing these:
iodepth=16
iodepth_batch=8
iodepth_batch_complete=8
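A sweep like the table above can be scripted. The loop below only prints the command for each block size (a dry run); drop the echo to actually execute. The device path and job parameters are examples modeled on the config above, not the exact original run.

```shell
#!/bin/sh
# Dry-run generator for a blocksize sweep; remove 'echo' to really run it.
# /dev/sdb is an example target - pick an unused device, writes are destructive!
DEV=/dev/sdb
for bs in 8k 16k 32k 64k 128k 256k 512k 1M 2M 4M; do
    echo fio --name=bs-sweep --filename=$DEV --rw=write --bs=$bs \
        --direct=1 --ioengine=libaio --iodepth=32 --numjobs=4 \
        --group_reporting=1 --time_based=1 --runtime=60 --ramp_time=5
done
```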

Windows Sample
Thane H.: Here is a sample ini file that uses raw devices:
[global]
rw=read
name=fire
verify=crc32c
bs=4096b
iodepth=256
size=64g
thread

[thread_fct0-0]
filename=\\.\PhysicalDrive1

[thread_fct0-1]
filename=\\.\PhysicalDrive1

[thread_fct2-0]
filename=\\.\PhysicalDrive0

[thread_fct2-1]
filename=\\.\PhysicalDrive0

Another sample:
 * 1) fio --name=job1 --filename=e: --size=5g --rw=write --bs=256k --thread

List devices: wmic diskdrive list brief

Unaligned DMA
iomem_align=int  This indicates the memory alignment of the IO memory buffers. Note that the given alignment is applied to the first IO unit buffer; if using iodepth, the alignment of the following buffers is given by the bs used. In other words, if using a bs that is a multiple of the page size in the system, all buffers will be aligned to this value. If using a bs that is not page aligned, the alignment of subsequent IO memory buffers is the sum of the iomem_align and bs used.

Linux: fio --name=job1 --filename=/dev/fioa --size=5g --iomem_align=1 --rw=write --bs=256k

FreeBSD:
fio --name=job1 --filename=/dev/ad1 --size=5g --iomem_align=1 --rw=write --bs=256k --thread
fio --name=job1 --filename=/dev/ad1 --size=1M --iomem_align=1 --direct=1 --rw=write --bs=256k --thread

Windows 7: C:\Program Files\fio>fio --name=job1 --filename=e: --size=5g --rw=write --bs=256k --thread --iomem_align=1 --direct=1

Windows 7 Raw Device: (Must run as Administrator) C:\Program Files\fio>fio --name=job1 --filename=\\.\PHYSICALDRIVE1 --size=5g --rw=write --bs=256k --thread --iomem_align=1 --direct=1

Misc
fio-4k-r-qd8.ini
[global]
readwrite=read
rwmixread=100
blocksize=4k
ioengine=aio
numjobs=16
thread=1
direct=1
iodepth=16
iodepth_batch=16
iodepth_batch_complete=16
group_reporting=1
ramp_time=5
norandommap=1
description=fio 4k read IOPS QD 8
time_based=1
runtime=300
randrepeat=0

[/dev/nbd0]
filename=/dev/nbd0

More Examples
Linux: fio --name=job1 --filename=/dev/sdb --iodepth=1 --bs=256k --rw=write --direct=1 --runtime=30 --ioengine=libaio

fio --name=job1 --filename=/dev/sdb --iodepth=1 --bs=256k --rw=write --direct=1 --runtime=30

fio --name=job1 --filename=/dev/sdb --iodepth=1 --bs=256k --rw=read --direct=1 --runtime=30

fio --name=job1 --filename=/dev/sdb --iodepth=32 --bs=256k --rw=write --direct=1 --runtime=60

fio --name=job1 --filename=/dev/sdb --iodepth=32 --bs=256k --rw=read --direct=1 --runtime=60

fio --name=job1 --filename=/dev/sdb --iodepth=32 --bs=256k --rw=write --direct=1 --name=job2 --rw=read --filename=/dev/sdb --iodepth=32 --direct=1 --bs=256k

fio --name=job1 --filename=/dev/sdb --iodepth=32 --bs=256k --rw=write --direct=1 --thread --runtime=120

fio --name=job1 --filename=/dev/sdb --iodepth=32 --bs=256k --rw=read --direct=1 --thread --runtime=120

fio --name=global --ioengine=libaio --iodepth=32 --bs=4k --direct=0 --loops=2 --filename=/dev/sdb --name=randread --rw=randread --name=randwrite --rw=randwrite

fio --output=output.txt --filename=/dev/nvme0n1 --iodepth=128 --bs=4k --thread=1 --direct=1 --ioengine=libaio --name=write-phase --rw=write --name=read-phase --rw=read

fio --output=output.txt --filename=/dev/nvme0n1 --iodepth=128 --bs=4k --thread=1 --direct=1 --ioengine=libaio --name=write-phase \
  --do_verify=0 --rw=write --name=read-phase --stonewall --do_verify=0 --time_based=1 --loops=999 --runtime=24h --rw=read

Windows: wmic diskdrive list brief

fio --name=job1 --filename=\\.\PhysicalDrive1 --iodepth=1 --bs=256k --rw=write --direct=1 --thread --runtime=30

fio --name=job1 --filename=\\.\PhysicalDrive1 --iodepth=1 --bs=256k --rw=read --direct=1 --thread --runtime=30

fio --name=job1 --filename=\\.\PhysicalDrive1 --iodepth=32 --bs=256k --rw=write --direct=1 --thread

fio --name=job1 --filename=\\.\PhysicalDrive1 --iodepth=32 --bs=256k --rw=read --direct=1 --thread

fio --name=job1 --filename=\\.\PhysicalDrive1 --iodepth=32 --bs=256k --rw=write --direct=1 --name=job2 --rw=read --filename=\\.\PhysicalDrive1 --iodepth=32 --direct=1 --bs=256k --thread

fio --name=job1 --filename=\\.\PhysicalDrive1 --iodepth=32 --bs=256k --rw=write --direct=1 --thread --runtime=120

fio --name=job1 --filename=\\.\PhysicalDrive1 --iodepth=32 --bs=256k --rw=read --direct=1 --thread --runtime=120

fio --name=job1 --filename=\\.\PhysicalDrive1 --iodepth=1 --bs=4k --rw=write --direct=1 --thread --runtime=60

fio --name=global --iodepth=32 --bs=4k --direct=0 --loops=2 --filename=\\.\PhysicalDrive1 --name=randread --rw=randread --name=randwrite --rw=randwrite

nvme random write
fio --name=rand_write_job --filename=/dev/nvme0n1 --iodepth=128 --bs=4k --rw=randwrite --direct=1 --runtime=24h --time_based --ioengine=libaio

fio --name=rand_write_job --filename=/dev/nvme0n1 --iodepth=128 --bs=4k --rw=randwrite --direct=1 --runtime=24h --time_based --ioengine=libaio --norandommap --random_generator=tausworthe

norandommap random_generator=tausworthe


 * "Normally fio will cover every block of the file when doing random I/O. If this parameter is given, a new offset will be chosen without looking at past I/O history. This parameter is mutually exclusive with verify."

Output to file statistics
fio --name=global --status-interval=10 --output fio.txt --output-format=terse --eta-newline=10 --numjobs=1 --iodepth=32 --direct=1 --ioengine=libaio --max_latency=5s \
  --norandommap --filename=/dev/sda --name=precondjob --rw=write --bs=128k --fill_device=1

fio --name=global --status-interval=60 --output fio.txt --eta-newline=10 --numjobs=1 --iodepth=32 --direct=1 --ioengine=libaio --max_latency=5s --norandommap \
  --filename=/dev/sda --name=precondjob --rw=write --bs=128k --size=2GB --stonewall --name=readjob --loops=10 --rw=read --bs=128k --stonewall

Report full status every 1 minute: --status-interval=1m

fill then read forever
Limit to only the first 20GB of the drive:

fill-read-forever.fio:
[write]
rw=write
bs=128k
direct=1
ioengine=libaio
iodepth=128
size=20G
filename=/dev/nvme0n1
stonewall

[read]
rw=read
bs=128k
direct=1
ioengine=libaio
iodepth=128
size=20G
filename=/dev/nvme0n1
runtime=9999h
time_based
stonewall

Standard
Example target devices: /dev/sdb, /dev/nvme0n1

Precondition Write (full drive write)
fio --name=global --iodepth=32 --bs=128k --direct=1 --ioengine=libaio \
  --filename=/dev/nvme0n1 --name=precondjob --rw=write --fill_device=1

Sequential Write (time based)
fio --name=global --iodepth=32 --bs=128k --direct=1 --ioengine=libaio \
  --filename=/dev/nvme0n1 --name=writejob --rw=write --runtime=24h --time_based

Precondition: (fill drive)
fio --name=global --iodepth=32 --bs=128k --direct=1 --ioengine=libaio \
  --filename=/dev/nvme0n1 --name=precondition --rw=write --fill_device=1

Sequential Read
fio --name=global --iodepth=32 --bs=128k --direct=1 --ioengine=libaio \
  --filename=/dev/nvme0n1 --name=readjob --rw=read --runtime=24h --time_based

Sequential Read (full drive read)
fio --name=global --iodepth=32 --bs=128k --direct=1 --ioengine=libaio \
  --filename=/dev/nvme0n1 --name=readjob --rw=read

Precondition Write
fio --name=global --status-interval=60 --output fio-precond.txt --iodepth=32 --bs=128k --direct=1 \
  --ioengine=libaio --max_latency=5s --norandommap \
  --filename=/dev/sdb --name=precondjob --rw=write --fill_device=1

Sequential Write
fio --name=global --status-interval=60 --output fio-write.txt --iodepth=32 --bs=128k --direct=1 \
  --ioengine=libaio --max_latency=5s --norandommap \
  --filename=/dev/sdb --name=writejob --rw=write --runtime=24h --time_based

With Precondition:
fio --name=global --status-interval=60 --output fio-precond.txt --iodepth=128 --bs=128k --direct=1 \
  --ioengine=libaio --max_latency=5s --norandommap \
  --filename=/dev/nvme0n1 \
  --name=precondjob --rw=write --fill_device=1 \
  --name=writejob --rw=write --runtime=24h --time_based --stonewall

Sequential Read
fio --name=global --status-interval=60 --output fio-read.txt --iodepth=32 --bs=128k --direct=1 \
  --ioengine=libaio --max_latency=5s --norandommap \
  --filename=/dev/sdb --name=readjob --rw=read --runtime=24h --time_based

Random Read Random Write
fio --name=global --iodepth=32 --bs=128k --direct=1 \
  --ioengine=libaio \
  --filename=/dev/nvme0n1 \
  --name=randwrite --rw=randwrite \
  --name=randread --rw=randread

fio --name=global --runtime=10m --iodepth=32 --bs=128k --direct=1 \
  --ioengine=libaio \
  --filename=/dev/nvme0n1 \
  --name=randwrite --rw=randwrite \
  --name=randread --rw=randread

fio --name=global --iodepth=32 --bs=128k --direct=1 \
  --ioengine=libaio --max_latency=5s --norandommap \
  --filename=/dev/md0 \
  --name=randwrite --rw=randwrite --runtime=10m --time_based \
  --name=randread --rw=randread --runtime=10m --time_based

Full Sequential Read Write
fio --name=global --iodepth=32 --bs=128k --direct=1 \
  --ioengine=libaio \
  --filename=/dev/nvme0n1 \
  --name=write --rw=write \
  --name=read --rw=read

fio --name=global --runtime=10m --iodepth=32 --bs=128k --direct=1 \
  --ioengine=libaio \
  --filename=/dev/nvme0n1 \
  --name=write --rw=write \
  --name=read --rw=read

keywords
linux benchmarking performance benchmark