Bug #24959

closed

Inconsistency over fio utility benchmarking

Added by Kavita Gaikwad over 5 years ago. Updated about 2 months ago.

Status:
Rejected
Priority:
Normal
Category:
-
Target version:
-
Difficulty:
Triaged:
No
Fixed in Releases:
Found in Releases:

Description

Cloned from https://bugzilla.redhat.com/show_bug.cgi?id=1622616

Description of problem:

Inconsistency over fio utility benchmarking

Version-Release number of selected component (if applicable):

satellite-installer-6.4.0.7-1.beta.el7sat.noarch
fio-3.1-2.el7.x86_64

How reproducible:

Steps to Reproduce:

1. # foreman-maintain upgrade check --target-version 6.4

Actual results:

pre-upgrade-step fails because we are using a VM:

- Check for recommended disk speed of pulp, mongodb, pgsql dir.:
- Finished

Disk speed : 24 MB/sec [FAIL]
Slow disk detected /var/lib/pulp mounted on /dev/mapper/vg_app1-lv_pulp.
Actual disk speed: 24 MB/sec
Expected disk speed: 80 MB/sec.

Expected results:

- The fio test looks extremely inconsistent when calculating disk speed; please see Additional info for a detailed analysis.

Additional info:

Fio results:

-----------
root@server1:~# sudo fio --name=job1 --rw=read --size=1g --directory=/var/lib/pulp --direct=1

Run status group 0 (all jobs):
READ: bw=24.0MiB/s (26.2MB/s), 24.0MiB/s-24.0MiB/s (26.2MB/s-26.2MB/s), io=1024MiB (1074MB), run=40996-40996msec
-----------

fio test on another server that has both a local disk and a SAN disk. The SAN disk reports similarly low read performance:

root@server2 ~$ sudo fio --name=job1 --rw=read --size=1g --direct=1 --directory=/hana/log

Run status group 0 (all jobs):
READ: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=1024MiB (1074MB), run=33264-33264msec

root@server2 ~$ sudo fio --name=job1 --rw=read --size=1g --direct=1 --directory=/var/tmp

Run status group 0 (all jobs):
READ: bw=92.0MiB/s (96.5MB/s), 92.0MiB/s-92.0MiB/s (96.5MB/s-96.5MB/s), io=1024MiB (1074MB), run=11129-11129msec

Executing on the same hardware setup with different fio parameters gives a different picture.


Testing different block sizes on the VM also gives noticeably different results (a block-size sweep sketch follows these results):

root@server1:~# sudo fio --name=job1 --rw=read --bs=4k --size=1g --directory=/var/lib/pulp --direct=1

Run status group 0 (all jobs):
READ: bw=24.0MiB/s (26.2MB/s), 24.0MiB/s-24.0MiB/s (26.2MB/s-26.2MB/s), io=1024MiB (1074MB), run=41011-41011msec

-----------
root@server1:~# sudo fio --name=job1 --rw=read --bs=8k --size=1g --directory=/var/lib/pulp --direct=1

Run status group 0 (all jobs):
READ: bw=46.6MiB/s (48.9MB/s), 46.6MiB/s-46.6MiB/s (48.9MB/s-48.9MB/s), io=1024MiB (1074MB), run=21954-21954msec

-----------
root@server1:~# sudo fio --name=job1 --rw=read --bs=16k --size=1g --directory=/var/lib/pulp --direct=1

Run status group 0 (all jobs):
READ: bw=84.3MiB/s (88.4MB/s), 84.3MiB/s-84.3MiB/s (88.4MB/s-88.4MB/s), io=1024MiB (1074MB), run=12141-12141msec

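For reference, a minimal sketch of a block-size sweep that reproduces the pattern above in one run (assumes bash and fio 3.x on the same host; the target directory is the one from the report). Note that fio defaults to bs=4k when --bs is not given, which is why the very first run and the explicit 4k run report the same speed.

-----------
# sweep a few block sizes against the same directory and keep only the bandwidth line
for bs in 4k 8k 16k 64k 1M; do
    echo "== bs=${bs} =="
    fio --name=job1 --rw=read --bs=${bs} --size=1g \
        --directory=/var/lib/pulp --direct=1 | grep 'READ: bw='
done
-----------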

The current IO test needs review and clarification on exactly what it measures:

E.g. bandwidth or latency, which block sizes are loaded, and whether the access pattern is sequential or random.

The difference on physical hardware was that, in the current test mode, the local physical HDD was outperforming the enterprise-class All-Flash SAN.

Adding a bit more randomness and more jobs shows that the SAN beats the local physical HDD by roughly a factor of 8 (see the randread runs below; a job-file sketch follows them).


root@server2 ~$ sudo fio --name=randread --rw=randread --direct=1 --bs=8k --numjobs=16 --size=1G --runtime=30 --group_reporting --directory=/var/tmp

Run status group 0 (all jobs):
READ: bw=40.4MiB/s (42.4MB/s), 40.4MiB/s-40.4MiB/s (42.4MB/s-42.4MB/s), io=1215MiB (1274MB), run=30048-30048msec

-----------
root@server2 ~$ sudo fio --name=randread --rw=randread --direct=1 --bs=8k --numjobs=16 --size=1G --runtime=30 --group_reporting --directory=/hana/log

Run status group 0 (all jobs):
READ: bw=291MiB/s (306MB/s), 291MiB/s-291MiB/s (306MB/s-306MB/s), io=8745MiB (9169MB), run=30002-30002msec
-----------
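For repeatability, the same randomized load can be kept in an fio job file instead of a long command line. A minimal sketch, assuming fio 3.x; the file name randread.fio and the directory are placeholders to adjust per host:

-----------
; randread.fio - equivalent of the randread command line above
[global]
rw=randread
direct=1
bs=8k
numjobs=16
size=1G
runtime=30
group_reporting
directory=/var/tmp

[randread]
-----------

Run it with: fio randread.fio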

Based on the above results, it looks like the fio test cannot be trusted for real workload simulation.

Actions #1

Updated by Kavita Gaikwad over 5 years ago

  • Assignee changed from Anurag Patel to Kavita Gaikwad
Actions #2

Updated by Eric Helms about 2 months ago

  • Status changed from New to Rejected