In the public cloud, VM, Storage, and Network are the three foundations of IaaS. This article introduces tools and methods for testing disk IOPS on an Azure VM.
I. Adding and Initializing a Disk
1. Add a disk
Fill in the required information and click OK to attach a disk to the VM. Note that this disk is rated at 500 IOPS with a throughput of 60 MB/s.
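For reference, a data disk can also be attached from the command line rather than the portal. The following is only a sketch using the current Azure CLI (az), which may differ from the classic tooling in use when this walkthrough was written; the resource group, VM name, and disk name are placeholders:

# Attach a new, empty ~1 TB (1023 GB) data disk to an existing VM
az vm disk attach \
    --resource-group myResourceGroup \
    --vm-name hwcentos \
    --name data-disk-1t \
    --new \
    --size-gb 1023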
2. Partition and format
Once attached, the disk is visible inside the VM:
sda is the OS disk, sdb is the Azure VM's temporary disk, the disk added earlier is sdc, and this new disk is sdd.
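If it is not obvious which device letter the new disk received, list the block devices first so the wrong disk is not partitioned by mistake:

# Show all block devices with their sizes and mount points
lsblk
# Or list every disk that fdisk can see
fdisk -cul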
Inspect the new disk:
[root@hwcentos ~]# fdisk -cul /dev/sdd
Disk /dev/sdd: 1098.4 GB, 1098437885952 bytes
255 heads, 63 sectors/track, 133544 cylinders, total 2145386496 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
The disk capacity is 1 TB. Create a partition on it:
[root@hwcentos ~]# fdisk -cu /dev/sdd
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xabbaf8a0.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/sdd: 1098.4 GB, 1098437885952 bytes
255 heads, 63 sectors/track, 133544 cylinders, total 2145386496 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xabbaf8a0

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-2145386495, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2145386495, default 2145386495):
Using default value 2145386495

Command (m for help): p

Disk /dev/sdd: 1098.4 GB, 1098437885952 bytes
255 heads, 63 sectors/track, 133544 cylinders, total 2145386496 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xabbaf8a0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048  2145386495  1072692224   83  Linux

Command (m for help): w
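The same single-partition layout can also be created non-interactively, which is convenient when scripting the setup; a sketch using parted, assuming the new disk is /dev/sdd as above:

# Write an msdos label and one primary partition covering the whole disk
parted -s /dev/sdd mklabel msdos
parted -s /dev/sdd mkpart primary ext4 1MiB 100%
# Re-read the partition table so /dev/sdd1 shows up
partprobe /dev/sdd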
3. Format the disk
Before formatting the disk with mkfs.ext4, check the man page:
man mkfs.ext4
Search for lazy:
lazy_itable_init[= <0 to disable, 1 to enable>]
       If enabled and the uninit_bg feature is enabled, the inode table will not be fully
       initialized by mke2fs. This speeds up filesystem initialization noticeably, but it
       requires the kernel to finish initializing the filesystem in the background when the
       filesystem is first mounted. If the option value is omitted, it defaults to 1 to
       enable lazy inode table initialization.
With this option the filesystem can be created very quickly; the kernel then finishes initializing it in the background the first time it is mounted.
[root@hwcentos ~]# mkfs.ext4 -E lazy_itable_init /dev/sdd1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
67043328 inodes, 268173056 blocks
13408652 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
8184 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
The 1 TB disk is formatted almost instantly.
Mount it and check:
[root@hwcentos ~]# mkdir /1T
[root@hwcentos ~]# mount /dev/sdd1 /1T
[root@hwcentos ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        30G  2.0G   27G   8% /
tmpfs           841M     0  841M   0% /dev/shm
/dev/sdb1        69G  180M   66G   1% /mnt/resource
/dev/sdd1      1007G  200M  956G   1% /1T
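To make the mount persist across reboots, add it to /etc/fstab. Because device letters such as sdd can change between boots, referencing the filesystem by UUID is safer; a sketch (replace <UUID> with the value blkid prints):

# Find the UUID of the new filesystem
blkid /dev/sdd1
# Append an fstab entry that mounts it on /1T at boot
echo "UUID=<UUID>  /1T  ext4  defaults,nofail  0  2" >> /etc/fstab
# Verify the entry without rebooting
mount -a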
II. The iops Testing Tool
Searching GitHub for iops, the top result is cxcv/iops.
It is a Python script:
https://raw.githubusercontent.com/cxcv/iops/master/iops
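The script can be fetched straight from that URL and made executable; here it is saved as iops.py to match the command used below:

# Download the iops script and make it executable
wget https://raw.githubusercontent.com/cxcv/iops/master/iops -O iops.py
chmod +x iops.py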
Run the script against the disk that was just created:
[root@hwcentos ~]# ./iops.py /dev/sdd1
/dev/sdd1, 1.10 T, sectorsize=512B, #threads=32, pattern=random:
 512 B blocks: 562.5 IO/s, 288.0 kB/s (  2.3 Mbit/s)
   1 kB blocks: 579.3 IO/s, 593.2 kB/s (  4.7 Mbit/s)
   2 kB blocks: 288.9 IO/s, 591.6 kB/s (  4.7 Mbit/s)
   4 kB blocks: 229.6 IO/s, 940.4 kB/s (  7.5 Mbit/s)
   8 kB blocks: 261.9 IO/s,   2.1 MB/s ( 17.2 Mbit/s)
  16 kB blocks: 175.8 IO/s,   2.9 MB/s ( 23.0 Mbit/s)
  32 kB blocks: 221.9 IO/s,   7.3 MB/s ( 58.2 Mbit/s)
  65 kB blocks: 259.7 IO/s,  17.0 MB/s (136.1 Mbit/s)
 131 kB blocks: 142.9 IO/s,  18.7 MB/s (149.9 Mbit/s)
 262 kB blocks: 106.1 IO/s,  27.8 MB/s (222.5 Mbit/s)
 524 kB blocks:  84.1 IO/s,  44.1 MB/s (352.6 Mbit/s)
   1 MB blocks:  49.6 IO/s,  52.0 MB/s (416.1 Mbit/s)
   2 MB blocks:  28.5 IO/s,  59.8 MB/s (478.2 Mbit/s)
   4 MB blocks:  14.8 IO/s,  62.3 MB/s (498.3 Mbit/s)
   8 MB blocks:   5.7 IO/s,  48.1 MB/s (384.6 Mbit/s)
The IOPS is around 500 and the throughput around 60 MB/s, which matches the disk's rated limits.
III. Testing Disk Performance with fio
Again on GitHub, searching for fio finds axboe/fio, which is the fio tool itself.
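Building that GitHub version from source is straightforward; a sketch, assuming gcc, make, and libaio-devel are already installed:

# Clone and build fio from source
git clone https://github.com/axboe/fio.git
cd fio
./configure
make && make install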
Alternatively, if the EPEL repository is configured on CentOS, fio can be installed directly with yum:
[root@hwcentos yum.repos.d]# yum search fio
Loaded plugins: fastestmirror, security
Repository 'epel' is missing name in configuration, using id
Loading mirror speeds from cached hostfile
epel                                                     | 4.4 kB     00:00
epel/primary_db                                          | 6.3 MB     00:04
epel/pkgtags                                             | 1.2 MB     00:01
=============================== N/S Matched: fio ===============================
dpm-rfio-server.x86_64 : DPM RFIO server
gfal2-plugin-rfio.x86_64 : Provide the rfio support for gfal2
fio.x86_64 : Multithreaded IO generation tool
root-io-rfio.x86_64 : Remote File input/output library for ROOT

  Name and summary matches only, use "search all" for everything.

[root@hwcentos yum.repos.d]# yum install fio -y
Loaded plugins: fastestmirror, security
Repository 'epel' is missing name in configuration, using id
Loading mirror speeds from cached hostfile
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package fio.x86_64 0:2.0.13-1.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch            Version                 Repository        Size
================================================================================
Installing:
 fio            x86_64          2.0.13-1.el6            epel             222 k

Transaction Summary
================================================================================
Install       1 Package(s)

Total download size: 222 k
Installed size: 1.1 M
Downloading Packages:
fio-2.0.13-1.el6.x86_64.rpm                              | 222 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : fio-2.0.13-1.el6.x86_64                                      1/1
  Verifying  : fio-2.0.13-1.el6.x86_64                                      1/1

Installed:
  fio.x86_64 0:2.0.13-1.el6

Complete!

[root@hwcentos yum.repos.d]# yum info fio
Loaded plugins: fastestmirror, security
Repository 'epel' is missing name in configuration, using id
Loading mirror speeds from cached hostfile
Installed Packages
Name        : fio
Arch        : x86_64
Version     : 2.0.13
Release     : 1.el6
Size        : 1.1 M
Repo        : installed
From repo   : epel
Summary     : Multithreaded IO generation tool
URL         : http://git.kernel.dk/?p=fio.git;a=summary
License     : GPLv2
Description : fio is an I/O tool that will spawn a number of threads or processes doing
            : a particular type of io action as specified by the user. fio takes a
            : number of global parameters, each inherited by the thread unless
            : otherwise parameters given to them overriding that setting is given.
            : The typical use of fio is to write a job file matching the io load
            : one wants to simulate.
The version installed by yum is 2.0.13, while the version on GitHub is 2.15.
Once installed, write a fio job file for the tests:
[global]
ioengine=psync
direct=1
thread=1
norandommap=1
randrepeat=0
runtime=60
ramp_time=6
size=1g
directory=/path/to/test

[read4k-rand]
stonewall
group_reporting
bs=4k
rw=randread
numjobs=8
iodepth=32

[read64k-seq]
stonewall
group_reporting
bs=64k
rw=read
numjobs=4
iodepth=8

[write4k-rand]
stonewall
group_reporting
bs=4k
rw=randwrite
numjobs=2
iodepth=4

[write64k-seq]
stonewall
group_reporting
bs=64k
rw=write
numjobs=2
iodepth=4
The job file defines four tests: random 4k read, sequential 64k read, random 4k write, and sequential 64k write. The ioengine is psync, a synchronous engine.
The individual parameters in the file are documented in man fio.
fio fio.conf > sync.txt
grep iops sync.txt
The results for the different test scenarios:
[root@hwcentos ~]# grep iops sync.txt
    read : io=113516KB, bw=1887.2KB/s, iops=471 , runt= 60152msec
    read : io=1723.8MB, bw=29416KB/s, iops=459 , runt= 60005msec
    write: io=109516KB, bw=1825.2KB/s, iops=456 , runt= 60004msec
    write: io=926336KB, bw=15409KB/s, iops=240 , runt= 60116msec
Now change the ioengine in the job file to libaio:
[global]
ioengine=libaio
fio fio.conf > async.txt
grep iops async.txt

[root@hwcentos ~]# grep iops async.txt
    read : io=119896KB, bw=1981.7KB/s, iops=491 , runt= 60503msec
    read : io=2225.5MB, bw=37954KB/s, iops=592 , runt= 60043msec
    write: io=115040KB, bw=1916.8KB/s, iops=479 , runt= 60019msec
    write: io=1725.8MB, bw=29377KB/s, iops=458 , runt= 60155msec
The IOPS increases under libaio, with the sequential 64k write showing the biggest gain (from 240 to 458 IOPS).
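For a quick spot check without a job file, an equivalent test can also be expressed directly on the fio command line; a minimal sketch of a 4k random read against the mounted disk (the job name and exact option values are illustrative):

# 4k random reads on a 1 GB test file under /1T, libaio with queue depth 32, for 60 seconds
fio --name=rand-read-4k --directory=/1T --size=1g \
    --rw=randread --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=32 --runtime=60 --time_based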