Fio iops test ; centos

Install fio on CentOS:

yum install libaio* gcc wget make
wget http://brick.kernel.dk/snaps/fio-2.0.14.tar.gz
tar -xvf fio-2.0.14.tar.gz
cd fio-2.0.14
./configure
make
make install
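
If the build and install succeeded, fio should now report its version (2.0.14 here, matching the tarball above):

 fio --version
 fio-2.0.14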


FIO test examples:

Random read/write, 4 GB test file, 75% read / 25% write:

 fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75


Random write, 4 GB test file:

 fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite

Random read (via rwmixread=100):

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G  --readwrite=randrw --rwmixread=100
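
Alternatively, the same pure random-read test can be written with fio's dedicated randread mode instead of the mix option (an equivalent variant of the command above):

 fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randread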

The same random read/write (rwmixread 75/25) with another ioengine, posixaio:

fio --randrepeat=1 --ioengine=posixaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
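
Each of these commands leaves a 4 GB file named test in the current directory (from --filename=test --size=4G); remove it once you are done benchmarking:

 rm -f test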

You can also put the test options into .fio job files. Create iotest1.fio for a sequential read/write test on a 1 GB file:

[global]
ioengine=posixaio
rw=readwrite
size=1g
directory=${HOME}/ 
thread=1
[trivial-readwrite-1g]

Run a job file like this:

fio iotest1.fio

Another example: sequential read/write on a 2 GB file with 2 worker threads:

[global]
ioengine=posixaio
rw=readwrite
size=2g
directory=${HOME}/
numjobs=2
thread
[trivial-readwrite-2g]
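
For comparison, the random read/write one-liner from the examples above can also be written as a job file (a sketch; the option values are copied from that command line, and the job name randrw-4g is arbitrary):

[global]
ioengine=libaio
direct=1
gtod_reduce=1
randrepeat=1
bs=4k
iodepth=64
size=4g
rw=randrw
rwmixread=75

[randrw-4g]
filename=test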



Example tests and output:

Test on a low-end VPS: 512 MB RAM, 1 vCPU, 5 GB HDD:

 [root@iotest2 /]#  fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.14
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m] [75.0% done] [860.1M/286.4M/0K /s] [220K/73.4K/0  iops] [eta 00m:01s]
test: (groupid=0, jobs=1): err= 0: pid=3500: Wed Nov 11 18:56:13 2015
  read : io=3071.7MB, bw=838323KB/s, iops=209580 , runt=  3752msec
  write: io=1024.4MB, bw=279562KB/s, iops=69890 , runt=  3752msec
  cpu          : usr=13.04%, sys=86.88%, ctx=4, majf=0, minf=20
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=786347/w=262229/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=3071.7MB, aggrb=838323KB/s, minb=838323KB/s, maxb=838323KB/s, mint=3752msec, maxt=3752msec
  WRITE: io=1024.4MB, aggrb=279561KB/s, minb=279561KB/s, maxb=279561KB/s, mint=3752msec, maxt=3752msec

Test on a larger VPS: 8 GB RAM, 80 GB SSD, 8 vCPUs:

[root@iotest2 /]#  fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.14
Starting 1 process
Jobs: 1 (f=1): [m] [-.-% done] [963.4M/320.1M/0K /s] [247K/82.2K/0  iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3503: Wed Nov 11 18:56:22 2015
  read : io=3071.7MB, bw=891550KB/s, iops=222887 , runt=  3528msec
  write: io=1024.4MB, bw=297312KB/s, iops=74327 , runt=  3528msec
  cpu          : usr=14.60%, sys=85.28%, ctx=4, majf=0, minf=21
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=786347/w=262229/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=3071.7MB, aggrb=891549KB/s, minb=891549KB/s, maxb=891549KB/s, mint=3528msec, maxt=3528msec
  WRITE: io=1024.4MB, aggrb=297311KB/s, minb=297311KB/s, maxb=297311KB/s, mint=3528msec, maxt=3528msec
 

In the last test we can see read iops=222887 and write iops=74327. Roughly the same figures appear in the progress line above: [963.4M/320.1M/0K /s] [247K/82.2K/0 iops]. That is noticeably faster than the low-end VPS from the first test (read iops=209580, write iops=69890), even though both sit on the same storage level (RAID-10, pure SSD).
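
As a sanity check, the reported bandwidth matches iops multiplied by the 4k block size (the small differences are rounding in fio's output):

 read:  222887 iops x 4 KiB = 891548 KiB/s   (fio reports aggrb=891549KB/s)
 write:  74327 iops x 4 KiB = 297308 KiB/s   (fio reports aggrb=297311KB/s)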


More: https://www.binarylane.com.au/support/solutions/articles/1000055889-how-to-benchmark-disk-i-o