I/O Benchmarking tools
This blog post will be a place to park ideas and experiences with I/O benchmark tools and will be updated on an ongoing basis.
Please feel free to share your own experiences with these tools or others in the comments!
There are a number of tools out there for I/O benchmark testing, such as the ones below.
My choice for best of breed is fio (thanks to Eric Grancher for suggesting fio).
IOZone, available at http://linux.die.net/man/1/iozone, is the tool I see referenced most often on the net and in Google searches. Its biggest drawback is that there seems to be no way to limit a test to 8K random reads.
Bonnie is close to IOZone, but not quite as flexible, and less flexible still than fio.
I haven't investigated FileBench yet; it looks interesting, but there isn't much info on it.
Fio – flexible I/O tester
Here is a description from the fio project page:
“fio is an I/O tool meant to be used both for benchmark and stress/hardware verification. It has support for 13 different types of I/O engines (sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more), I/O priorities (for newer Linux kernels), rate I/O, forked or threaded jobs, and much more. It can work on block devices as well as files. fio accepts job descriptions in a simple-to-understand text format. Several example job files are included. fio displays all sorts of I/O performance information. Fio is in wide use in many places, for both benchmarking, QA, and verification purposes. It supports Linux, FreeBSD, NetBSD, OS X, OpenSolaris, AIX, HP-UX, and Windows.”
With fio, you set up the benchmark options in a job file. Example:
# job name between brackets (except when value is "global")
[read_8k_200MB]

# overwrite: if true, will create file if it doesn't exist
# if file exists and is large enough nothing happens
# here it is set to false because file should exist
overwrite=0

# rw=
#   read       Sequential reads
#   write      Sequential writes
#   randwrite  Random writes
#   randread   Random reads
#   rw         Sequential mixed reads and writes
#   randrw     Random mixed reads and writes
rw=read

# ioengine=
#   sync        Basic read(2) or write(2) io. lseek(2) is
#               used to position the io location.
#   psync       Basic pread(2) or pwrite(2) io.
#   vsync       Basic readv(2) or writev(2) IO.
#   libaio      Linux native asynchronous io.
#   posixaio    glibc posix asynchronous io.
#   solarisaio  Solaris native asynchronous io.
#   windowsaio  Windows native asynchronous io.
ioengine=libaio

# direct: if value is true, use non-buffered io. This is usually
# O_DIRECT. Note that ZFS on Solaris doesn't support direct io.
direct=1

# bs: the block size used for the io units. Defaults to 4k.
bs=8k

directory=/tmpnfs

# fadvise_hint: if set to true fio will use fadvise() to advise the kernel
# on what IO patterns it is likely to issue.
fadvise_hint=0

# nrfiles=: number of files to use for this job. Defaults to 1.
nrfiles=1

filename=toto.dbf
size=200m
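If you want to repeat the test across several block sizes, one option is to generate job files like the one above from a small script. A minimal Python sketch (the job names, file names, and parameter values are illustrative, not part of the original setup):

```python
# Generate one fio job file per block size, based on the job shown above.
# Paths (/tmpnfs, toto.dbf) follow the example config; adjust to your system.
job_template = """[read_{bs}_200MB]
overwrite=0
rw=read
ioengine=libaio
direct=1
bs={bs}
directory=/tmpnfs
fadvise_hint=0
nrfiles=1
filename=toto.dbf
size=200m
"""

for bs in ("4k", "8k", "16k"):
    with open("read_{}.fio".format(bs), "w") as f:
        f.write(job_template.format(bs=bs))
```

Each generated file can then be passed to fio exactly like the hand-written one.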
$ fio config_file
read_8k_200MB: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=1
fio 1.50
Starting 1 process
Jobs: 1 (f=1): [R] [100.0% done] [8094K/0K /s] [988 /0 iops] [eta 00m:00s]
read_8k_200MB: (groupid=0, jobs=1): err= 0: pid=27041
  read : io=204800KB, bw=12397KB/s, iops=1549 , runt= 16520msec
    slat (usec): min=14 , max=2324 , avg=20.09, stdev=15.57
    clat (usec): min=62 , max=10202 , avg=620.90, stdev=246.24
     lat (usec): min=203 , max=10221 , avg=641.43, stdev=246.75
    bw (KB/s) : min= 7680, max=14000, per=100.08%, avg=12407.27, stdev=1770.39
  cpu          : usr=0.69%, sys=2.62%, ctx=26443, majf=0, minf=26
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=25600/0/0, short=0/0/0
     lat (usec): 100=0.01%, 250=2.11%, 500=20.13%, 750=67.00%, 1000=3.29%
     lat (msec): 2=7.21%, 4=0.23%, 10=0.02%, 20=0.01%

Run status group 0 (all jobs):
   READ: io=204800KB, aggrb=12397KB/s, minb=12694KB/s, maxb=12694KB/s, mint=16520msec, maxt=16520msec
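When collecting results from many runs, the headline numbers can be scraped from this human-readable output. A minimal Python sketch, assuming the fio 1.50-style "read :" summary line shown above (the function name is mine, and the regex only targets this one line format):

```python
import re

def parse_fio_read_line(output):
    """Extract total io, bandwidth (KB/s), and IOPS from the 'read :'
    summary line of fio 1.x human-readable output."""
    m = re.search(r"read\s*:\s*io=(\S+?),\s*bw=(\d+)KB/s,\s*iops=(\d+)", output)
    if m is None:
        return None
    return {"io": m.group(1), "bw_kb_s": int(m.group(2)), "iops": int(m.group(3))}

sample = "  read : io=204800KB, bw=12397KB/s, iops=1549 , runt= 16520msec"
print(parse_fio_read_line(sample))
# → {'io': '204800KB', 'bw_kb_s': 12397, 'iops': 1549}
```

Newer fio releases can emit machine-readable output directly, which is more robust than scraping if your version supports it.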