• Introduction to FIO, a Cloud Disk Performance Testing Tool


    I. Performance Metrics for Cloud Disks

    The performance of a cloud disk is generally measured with the following metrics:

    • IOPS: read/write operations per second, measured as a count. The type of the underlying storage media largely determines the IOPS a device can deliver.
      Total IOPS: total number of I/O operations executed per second
      Random read IOPS: average number of random read I/O operations per second
      Random write IOPS: average number of random write I/O operations per second
      Sequential read IOPS: average number of sequential read I/O operations per second
      Sequential write IOPS: average number of sequential write I/O operations per second
    • Throughput: amount of data read or written per second, in MB/s.
      Throughput is the quantity of data that can be successfully transferred per unit of time.
      Workloads dominated by large sequential reads and writes, such as Hadoop offline-computing jobs, should be sized for throughput.
    • Latency: the time from issuing an I/O operation until its acknowledgement is received, typically reported in microseconds or milliseconds.
      For latency-sensitive applications such as databases (excessive latency degrades performance or causes errors), SSD-backed storage is recommended.
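    The three metrics are linked: throughput equals IOPS times the block size. As a quick sanity check, plugging in the 2446 IOPS at 4 KiB blocks seen in the run in the next section:

```shell
# throughput = IOPS x block size; 2446 IOPS at 4 KiB blocks:
awk 'BEGIN { iops = 2446; bs_kib = 4; printf "%.2f MiB/s\n", iops * bs_kib / 1024 }'
```

    This prints 9.55 MiB/s, matching the r=9784KiB/s that fio reports for that run.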

    II. Explanation of the Output Fields

    [root@host-10-0-1-36 ~]# fio --bs=4k --ioengine=libaio --iodepth=1 --direct=1 --rw=read --time_based --runtime=600  --refill_buffers --norandommap --randrepeat=0 --group_reporting --name=fio-read --size=100G --filename=/dev/vdb 
    fio-read: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
    fio-3.1
    Starting 1 process
    Jobs: 1 (f=1): [R(1)][100.0%][r=9784KiB/s,w=0KiB/s][r=2446,w=0 IOPS][eta 00m:00s]
    fio-read: (groupid=0, jobs=1): err= 0: pid=22004: Wed Oct 10 21:35:42 2018
       read: IOPS=2593, BW=10.1MiB/s (10.6MB/s)(6078MiB/600001msec)
        slat (usec): min=4, max=1532, avg=11.98, stdev= 8.10
        clat (nsec): min=1021, max=66079k, avg=370367.39, stdev=395393.29
         lat (usec): min=44, max=66086, avg=382.88, stdev=399.21
        clat percentiles (usec):
         |  1.00th=[   42],  5.00th=[   44], 10.00th=[   45], 20.00th=[   46],
         | 30.00th=[   48], 40.00th=[   51], 50.00th=[  383], 60.00th=[  578],
         | 70.00th=[  644], 80.00th=[  701], 90.00th=[  783], 95.00th=[  865],
         | 99.00th=[  988], 99.50th=[ 1020], 99.90th=[ 1336], 99.95th=[ 2057],
         | 99.99th=[11338]
       bw (  KiB/s): min= 2000, max=68272, per=99.98%, avg=10370.55, stdev=8123.28, samples=1199
       iops        : min=  500, max=17068, avg=2592.62, stdev=2030.82, samples=1199
      lat (usec)   : 2=0.01%, 20=0.01%, 50=37.97%, 100=10.63%, 250=0.39%
      lat (usec)   : 500=6.09%, 750=31.54%, 1000=12.65%
      lat (msec)   : 2=0.67%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.01%
      lat (msec)   : 100=0.01%
      cpu          : usr=1.43%, sys=5.12%, ctx=1555984, majf=0, minf=35
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued rwt: total=1556003,0,0, short=0,0,0, dropped=0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=1
    
    Run status group 0 (all jobs):
       READ: bw=10.1MiB/s (10.6MB/s), 10.1MiB/s-10.1MiB/s (10.6MB/s-10.6MB/s), io=6078MiB (6373MB), run=600001-600001msec
    
    Disk stats (read/write):
      vdb: ios=1555778/0, merge=0/0, ticks=570518/0, in_queue=569945, util=95.04%
    
    Field        Meaning
    io           total amount of I/O transferred, in MiB
    bw           average I/O bandwidth
    iops         average I/O operations per second
    runt         thread run time
    slat         submission latency (time to hand the I/O to the kernel)
    clat         completion latency (from submission to completion)
    lat          total response time (slat + clat)
    cpu          CPU utilization
    IO depths    distribution of the I/O queue depth over the run
    IO submit    how many I/Os were submitted in a single submit call
    IO complete  how many I/Os were reaped in a single completion call
    IO issued    number of read/write requests issued, plus any shorts or drops
    IO latency   settings in effect when a latency target (--latency_target) is used
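    For scripting, `fio --output-format=json` is the robust way to collect these fields, but the human-readable report can also be parsed. A minimal sketch pulling the average out of the `iops : min=..., max=..., avg=...` line shown above:

```shell
# Extract avg= from fio's "iops : min=..., max=..., avg=..., ..." summary line
line='iops        : min=  500, max=17068, avg=2592.62, stdev=2030.82, samples=1199'
echo "$line" | awk -F'avg=' '{ split($2, parts, ","); print parts[1] }'
```

    This prints 2592.62, the average IOPS over all samples of the run.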

    III. FIO Parameter Reference

    [root@host-10-0-1-36 ~]# fio -direct=1 -iodepth=64 -rw=read -ioengine=libaio -bs=4096k -size=100G -numjobs=1 -runtime=300 -group_reporting -filename=/dev/vdb -name=Write_PPS_Testing
    

    Note: testing the raw block device gives the true performance of the disk, but writing directly to the raw device destroys the file system on it. Back up your data before testing.

    Parameter  Description
    -direct=1  Bypass the I/O buffer cache during the test; data goes straight to the device
    -iodepth=128  With AIO, keep at most 128 I/Os in flight at once
    -rw=randwrite  The access pattern is random writes. For other tests it can be set to: randread (random reads), read (sequential reads), write (sequential writes), or randrw (mixed random reads and writes)
    -ioengine=libaio  Use libaio (Linux native AIO). Applications generally issue I/O in one of two ways. Synchronous: one request at a time, returning only after the kernel completes it, so a single thread's iodepth never exceeds 1; parallelism comes from running many threads, typically 16-32, to keep the queue full. Asynchronous: engines such as libaio submit a batch of requests at once and then wait for their completions, which reduces round trips and is more efficient
    -bs=4k  Block size of each I/O is 4 KB (also the default when unspecified). For IOPS tests use a small block size such as 4k; for throughput tests use a large one such as 4096k, as in the command above
    -size=1G  Total size of the test file is 1 GiB
    -numjobs=1  Number of test threads is 1
    -runtime=1000  Run for 1000 seconds. If omitted, fio stops after writing the full -size, in -bs-sized blocks
    -group_reporting  Aggregate the statistics of all jobs into one summary instead of reporting each job separately
    -filename=iotest  Name of the test target, e.g. iotest. Testing the raw device gives the true disk performance, but it destroys the file system on the device; back up your data before testing
    -name=Rand_Write_Testing  Name of the job, here Rand_Write_Testing; any name will do
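    As a safer starting point, the test can target a regular file instead of the raw device, leaving the file system intact. The sketch below builds such a random-write command and prints it rather than running it; the path `/mnt/data/fio.testfile` is an arbitrary example.

```shell
# File-based variant of the random-write test; run it with: eval "$cmd"
cmd="fio --name=Rand_Write_Testing --rw=randwrite --bs=4k --iodepth=128 \
    --direct=1 --ioengine=libaio --size=1G --runtime=60 --time_based \
    --group_reporting --filename=/mnt/data/fio.testfile"
echo "$cmd"
```

    Expect somewhat lower numbers than on the raw device, since the file system adds overhead.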

    IV. Test Results

    Here we use 4K blocks with random reads and random writes to measure the peak IOPS, and 4M blocks with sequential reads and writes to measure the peak throughput.

    • Random write, 4K blocks
    [root@host-10-0-1-36 ~]# fio --bs=4k --ioengine=libaio --iodepth=128 --direct=1 --rw=randwrite --time_based --runtime=300  --refill_buffers --norandommap --randrepeat=0 --group_reporting --name=fio-write --size=1G --filename=/dev/vdb 
    fio-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
    fio-3.1
    Starting 1 process
    Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=3996KiB/s][r=0,w=999 IOPS][eta 00m:00s]
    fio-write: (groupid=0, jobs=1): err= 0: pid=22050: Wed Oct 10 22:29:32 2018
      write: IOPS=2819, BW=11.0MiB/s (11.5MB/s)(3309MiB/300484msec)
        slat (usec): min=2, max=2399, avg= 9.54, stdev=10.19
        clat (usec): min=1180, max=3604.0k, avg=45387.25, stdev=168013.09
         lat (usec): min=1201, max=3604.0k, avg=45397.35, stdev=168013.57
        clat percentiles (usec):
         |  1.00th=[   1713],  5.00th=[   2212], 10.00th=[   2835],
         | 20.00th=[   4015], 30.00th=[   5211], 40.00th=[   6849],
         | 50.00th=[   8979], 60.00th=[  11994], 70.00th=[  17695],
         | 80.00th=[  33162], 90.00th=[  61604], 95.00th=[ 137364],
         | 99.00th=[ 893387], 99.50th=[1266680], 99.90th=[2122318],
         | 99.95th=[2432697], 99.99th=[2969568]
       bw (  KiB/s): min=    8, max=49120, per=100.00%, avg=11603.11, stdev=10950.79, samples=584
       iops        : min=    2, max=12280, avg=2900.77, stdev=2737.69, samples=584
      lat (msec)   : 2=3.16%, 4=16.75%, 10=33.83%, 20=18.49%, 50=14.82%
      lat (msec)   : 100=6.56%, 250=2.86%, 500=1.49%, 750=0.70%, 1000=0.53%
      lat (msec)   : 2000=0.69%, >=2000=0.12%
      cpu          : usr=1.84%, sys=4.16%, ctx=323739, majf=0, minf=28
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
         issued rwt: total=0,847133,0, short=0,0,0, dropped=0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=128
    
    Run status group 0 (all jobs):
      WRITE: bw=11.0MiB/s (11.5MB/s), 11.0MiB/s-11.0MiB/s (11.5MB/s-11.5MB/s), io=3309MiB (3470MB), run=300484-300484msec
    
    Disk stats (read/write):
      vdb: ios=91/847074, merge=0/0, ticks=3/38321566, in_queue=38360706, util=100.00%
    
    

    We can see that with 4K blocks, random write peaks at 12280 IOPS (the per-sample max; the run averaged 2819 IOPS).
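    The average IOPS fio prints can be cross-checked from two other numbers in the same report: the issued-request count (847133 writes) and the runtime (300484 ms):

```shell
# avg IOPS = total I/Os issued / runtime in seconds
awk 'BEGIN { printf "%.0f\n", 847133 / 300.484 }'
```

    This prints 2819, matching the `write: IOPS=2819` line above.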

    • Random read, 4K blocks
    [root@host-10-0-1-36 ~]# fio --bs=4k --ioengine=libaio --iodepth=128 --direct=1 --rw=randread --time_based --runtime=300  --refill_buffers --norandommap --randrepeat=0 --group_reporting --name=fio-write --size=1G --filename=/dev/vdb 
    fio-write: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
    fio-3.1
    Starting 1 process
    Jobs: 1 (f=1): [r(1)][100.0%][r=87.1MiB/s,w=0KiB/s][r=22.3k,w=0 IOPS][eta 00m:00s]
    fio-write: (groupid=0, jobs=1): err= 0: pid=22055: Wed Oct 10 22:51:54 2018
       read: IOPS=22.3k, BW=87.0MiB/s (91.2MB/s)(25.5GiB/300004msec)
        slat (usec): min=2, max=8626, avg= 7.91, stdev=11.85
        clat (usec): min=36, max=71810, avg=5735.17, stdev=1405.20
         lat (usec): min=45, max=71826, avg=5743.59, stdev=1405.30
        clat percentiles (usec):
         |  1.00th=[ 1958],  5.00th=[ 3556], 10.00th=[ 4424], 20.00th=[ 5145],
         | 30.00th=[ 5407], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5866],
         | 70.00th=[ 6063], 80.00th=[ 6259], 90.00th=[ 6783], 95.00th=[ 7504],
         | 99.00th=[10290], 99.50th=[12256], 99.90th=[16712], 99.95th=[18482],
         | 99.99th=[25035]
       bw (  KiB/s): min=74872, max=93240, per=100.00%, avg=89121.48, stdev=2687.55, samples=600
       iops        : min=18718, max=23310, avg=22280.35, stdev=671.89, samples=600
      lat (usec)   : 50=0.01%, 100=0.01%, 250=0.01%, 500=0.02%, 750=0.06%
      lat (usec)   : 1000=0.12%
      lat (msec)   : 2=0.86%, 4=6.02%, 10=91.77%, 20=1.11%, 50=0.03%
      lat (msec)   : 100=0.01%
      cpu          : usr=6.56%, sys=28.57%, ctx=3473136, majf=0, minf=160
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
         issued rwt: total=6683408,0,0, short=0,0,0, dropped=0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=128
    
    Run status group 0 (all jobs):
       READ: bw=87.0MiB/s (91.2MB/s), 87.0MiB/s-87.0MiB/s (91.2MB/s-91.2MB/s), io=25.5GiB (27.4GB), run=300004-300004msec
    
    Disk stats (read/write):
      vdb: ios=6680955/0, merge=0/0, ticks=37981396/0, in_queue=37983491, util=100.00%
    

    We can see that with 4K blocks, random read peaks at 23310 IOPS (the per-sample max; the run averaged 22280 IOPS).
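    With the queue kept full, latency, queue depth, and IOPS are tied together by Little's law: mean latency ≈ iodepth / IOPS. Checking it against this run (iodepth 128, average 22280 IOPS):

```shell
# Little's law: mean latency = queue depth / IOPS
awk 'BEGIN { printf "%.1f ms\n", 128 / 22280 * 1000 }'
```

    This prints 5.7 ms, close to the 5735 µs mean clat fio reports, confirming the device was saturated by the queue depth rather than idling.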

    • Sequential write, 4M blocks
    [root@host-10-0-1-36 ~]# fio -direct=1 -iodepth=64 -rw=write -ioengine=libaio -bs=4096k -size=100G -numjobs=1 -runtime=300 -group_reporting -filename=/dev/vdb -name=Write_PPS_Testing
    Write_PPS_Testing: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=64
    fio-3.1
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=108MiB/s][r=0,w=27 IOPS][eta 00m:00s]
    Write_PPS_Testing: (groupid=0, jobs=1): err= 0: pid=22098: Wed Oct 10 23:20:07 2018
      write: IOPS=28, BW=115MiB/s (120MB/s)(33.7GiB/300466msec)
        slat (usec): min=564, max=2554.1k, avg=34767.87, stdev=65067.32
        clat (msec): min=400, max=12179, avg=2188.04, stdev=1037.46
         lat (msec): min=473, max=12231, avg=2222.81, stdev=1047.45
        clat percentiles (msec):
         |  1.00th=[ 1435],  5.00th=[ 1586], 10.00th=[ 1653], 20.00th=[ 1754],
         | 30.00th=[ 1838], 40.00th=[ 1921], 50.00th=[ 2005], 60.00th=[ 2089],
         | 70.00th=[ 2198], 80.00th=[ 2333], 90.00th=[ 2534], 95.00th=[ 2802],
         | 99.00th=[ 8490], 99.50th=[ 9731], 99.90th=[11745], 99.95th=[12013],
         | 99.99th=[12147]
       bw (  KiB/s): min= 8192, max=196608, per=100.00%, avg=120954.04, stdev=31456.39, samples=580
       iops        : min=    2, max=   48, avg=29.53, stdev= 7.68, samples=580
      lat (msec)   : 500=0.02%, 750=0.03%, 1000=0.13%, 2000=49.25%, >=2000=50.56%
      cpu          : usr=1.04%, sys=2.16%, ctx=6387, majf=0, minf=29
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued rwt: total=0,8627,0, short=0,0,0, dropped=0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
      WRITE: bw=115MiB/s (120MB/s), 115MiB/s-115MiB/s (120MB/s-120MB/s), io=33.7GiB (36.2GB), run=300466-300466msec
    
    Disk stats (read/write):
      vdb: ios=91/77596, merge=0/0, ticks=6/37685137, in_queue=37717833, util=99.99%
    

    We can see that with 4M blocks, sequential write peaks at 192 MiB/s (max bw = 196608 KiB/s; the run averaged 115 MiB/s).
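    Note that fio's per-sample `bw` line reports KiB/s, so the max=196608 above converts to MiB/s as:

```shell
# 196608 KiB/s -> MiB/s
awk 'BEGIN { printf "%.0f MiB/s\n", 196608 / 1024 }'
```

    This prints 192 MiB/s; reading 196608 KiB/s as "196M" overstates the peak slightly.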

    • Sequential read, 4M blocks
    [root@host-10-0-1-36 ~]# fio -direct=1 -iodepth=64 -rw=read -ioengine=libaio -bs=4096k -size=100G -numjobs=1 -runtime=300 -group_reporting -filename=/dev/vdb -name=Write_PPS_Testing
    Write_PPS_Testing: (g=0): rw=read, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=64
    fio-3.1
    Starting 1 process
    Jobs: 1 (f=1): [R(1)][100.0%][r=312MiB/s,w=0KiB/s][r=78,w=0 IOPS][eta 00m:00s]
    Write_PPS_Testing: (groupid=0, jobs=1): err= 0: pid=22103: Wed Oct 10 23:26:31 2018
       read: IOPS=35, BW=142MiB/s (149MB/s)(41.6GiB/300201msec)
        slat (usec): min=469, max=95190, avg=28166.40, stdev=15966.53
        clat (msec): min=185, max=4070, avg=1772.98, stdev=551.21
         lat (msec): min=205, max=4107, avg=1801.15, stdev=558.25
        clat percentiles (msec):
         |  1.00th=[  518],  5.00th=[  634], 10.00th=[  751], 20.00th=[ 1536],
         | 30.00th=[ 1770], 40.00th=[ 1854], 50.00th=[ 1921], 60.00th=[ 1989],
         | 70.00th=[ 2039], 80.00th=[ 2140], 90.00th=[ 2299], 95.00th=[ 2433],
         | 99.00th=[ 2802], 99.50th=[ 2970], 99.90th=[ 3473], 99.95th=[ 3641],
         | 99.99th=[ 3775]
       bw (  KiB/s): min=106496, max=466944, per=99.97%, avg=145216.28, stdev=60136.24, samples=597
       iops        : min=   26, max=  114, avg=35.45, stdev=14.68, samples=597
      lat (msec)   : 250=0.10%, 500=0.66%, 750=9.21%, 1000=7.02%, 2000=46.51%
      lat (msec)   : >=2000=36.51%
      cpu          : usr=0.05%, sys=2.61%, ctx=10959, majf=0, minf=672
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued rwt: total=10646,0,0, short=0,0,0, dropped=0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
       READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=41.6GiB (44.7GB), run=300201-300201msec
    
    Disk stats (read/write):
      vdb: ios=95740/0, merge=0/0, ticks=37360788/0, in_queue=37382226, util=100.00%
    

    We can see that with 4M blocks, sequential read peaks at 456 MiB/s (max bw = 466944 KiB/s; the run averaged 142 MiB/s).
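    The average throughput can likewise be recomputed from the totals in the report: 41.6 GiB read over 300.201 seconds:

```shell
# avg throughput = total data / runtime
awk 'BEGIN { printf "%.0f MiB/s\n", 41.6 * 1024 / 300.201 }'
```

    This prints 142 MiB/s, matching the `BW=142MiB/s` summary above.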

  • Source: https://www.cnblogs.com/yuhaohao/p/9770701.html