• Lab Assignment 5: Disk Quotas (2019-07-02)


    1.Add an 80 GB SCSI disk to the host

    Shut down the machine

    Add an 80 GB disk with a SCSI interface

    Power on

    2.Create three 20 GB primary partitions
    [root@localhost ~]# fdisk /dev/sdc
    Command (m for help): n
    Partition type:
    p primary (0 primary, 0 extended, 4 free)
    e extended
    Select (default p): p
    Partition number (1-4, default 1):
    First sector (2048-167772159, default 2048):
    Using default value 2048
    Last sector, +sectors or +size{K,M,G} (2048-167772159, default 167772159): +20G
    Partition 1 of type Linux and of size 20 GiB is set

    Command (m for help): n
    Partition type:
    p primary (1 primary, 0 extended, 3 free)
    e extended
    Select (default p):
    Using default response p
    Partition number (2-4, default 2):
    First sector (41945088-167772159, default 41945088):
    Using default value 41945088
    Last sector, +sectors or +size{K,M,G} (41945088-167772159, default 167772159): +20G
    Partition 2 of type Linux and of size 20 GiB is set

    Command (m for help): n
    Partition type:
    p primary (2 primary, 0 extended, 2 free)
    e extended
    Select (default p):
    Using default response p
    Partition number (3,4, default 3):
    First sector (83888128-167772159, default 83888128):
    Using default value 83888128
    Last sector, +sectors or +size{K,M,G} (83888128-167772159, default 167772159): +20G
    Partition 3 of type Linux and of size 20 GiB is set

    Command (m for help): w
    The partition table has been altered!

    Calling ioctl() to re-read partition table.
    Syncing disks.
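The start sectors fdisk reports above can be reproduced with shell arithmetic (assuming the 512-byte sectors fdisk uses):

```shell
# Each 20 GiB partition spans 20*1024^3 / 512 = 41943040 sectors,
# so consecutive partitions begin exactly that many sectors apart.
sectors_20g=$((20 * 1024 * 1024 * 1024 / 512))
echo "sectors per 20 GiB partition: $sectors_20g"             # 41943040
echo "partition 2 first sector: $((2048 + sectors_20g))"      # 41945088
echo "partition 3 first sector: $((41945088 + sectors_20g))"  # 83888128
```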


    3.Convert the three primary partitions into physical volumes (pvcreate), then scan the physical volumes on the system
    [root@localhost ~]# pvcreate /dev/sdc[123]
    Physical volume "/dev/sdc1" successfully created
    Physical volume "/dev/sdc2" successfully created
    Physical volume "/dev/sdc3" successfully created
    [root@localhost ~]# pvscan
    PV /dev/sda2 VG centos lvm2 [39.51 GiB / 44.00 MiB free]
    PV /dev/sde2 VG gl1 lvm2 [10.00 GiB / 0 free]
    PV /dev/sde3 VG gl1 lvm2 [10.00 GiB / 0 free]
    PV /dev/sde1 VG gl1 lvm2 [10.00 GiB / 4.99 GiB free]
    PV /dev/sdc2 lvm2 [20.00 GiB]
    PV /dev/sdc1 lvm2 [20.00 GiB]
    PV /dev/sdc3 lvm2 [20.00 GiB]
    Total: 7 [129.50 GiB] / in use: 4 [69.50 GiB] / in no VG: 3 [60.00 GiB]


    4.Create a volume group named myvg from two of the physical volumes, then check the volume group's size
    [root@localhost ~]# vgcreate myvg /dev/sdc[12]
    Volume group "myvg" successfully created
    [root@localhost ~]# vgdisplay myvg
    --- Volume group ---
    VG Name myvg
    System ID
    Format lvm2
    Metadata Areas 2
    Metadata Sequence No 1
    VG Access read/write
    VG Status resizable
    MAX LV 0
    Cur LV 0
    Open LV 0
    Max PV 0
    Cur PV 2
    Act PV 2
    VG Size 39.99 GiB
    PE Size 4.00 MiB
    Total PE 10238
    Alloc PE / Size 0 / 0
    Free PE / Size 10238 / 39.99 GiB
    VG UUID 6afFRZ-0WIZ-cDnM-fkId-Miew-lG77-IPK71o
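The PE figures above can be checked with shell arithmetic. A sketch, assuming LVM's default metadata area, which costs one 4 MiB extent on each of the two 20 GiB PVs:

```shell
# Each 20 GiB PV would hold 20*1024/4 = 5120 extents of 4 MiB,
# minus one extent consumed by the LVM metadata area.
pe_per_pv=$((20 * 1024 / 4 - 1))        # 5119 usable extents per PV
total_pe=$((2 * pe_per_pv))
echo "Total PE: $total_pe"              # 10238
echo "VG size: $((total_pe * 4)) MiB"   # 40952 MiB, i.e. ~39.99 GiB
```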

    5.Create a logical volume named mylv, 30 GB in size
    [root@localhost ~]# lvcreate -L 30G -n mylv myvg
    Logical volume "mylv" created.
    [root@localhost ~]# lvdisplay /dev/myvg/mylv
    --- Logical volume ---
    LV Path /dev/myvg/mylv
    LV Name mylv
    VG Name myvg
    LV UUID 4sEUaj-zVaG-yTAz-VKbc-TJZ8-PseK-wiiISk
    LV Write Access read/write
    LV Creation host, time localhost.localdomain, 2019-08-01 17:01:31 +0800
    LV Status available
    # open 0
    LV Size 30.00 GiB
    Current LE 7680
    Segments 2
    Allocation inherit
    Read ahead sectors auto
    - currently set to 8192
    Block device 253:3
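The `Current LE` value follows directly from the 4 MiB PE size shown by vgdisplay:

```shell
# A 30 GiB LV divided into 4 MiB extents gives 7680 logical extents.
echo $((30 * 1024 / 4))   # 7680
```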

    6.Format the logical volume with an XFS filesystem, mount it on /data, and create a file there to test
    [root@localhost ~]# mkdir /data
    [root@localhost ~]# mkfs.xfs /dev/myvg/mylv
    meta-data=/dev/myvg/mylv isize=256 agcount=4, agsize=1966080 blks
    = sectsz=512 attr=2, projid32bit=1
    = crc=0 finobt=0
    data = bsize=4096 blocks=7864320, imaxpct=25
    = sunit=0 swidth=0 blks
    naming =version 2 bsize=4096 ascii-ci=0 ftype=0
    log =internal log bsize=4096 blocks=3840, version=2
    = sectsz=512 sunit=0 blks, lazy-count=1
    realtime =none extsz=4096 blocks=0, rtextents=0
    [root@localhost ~]# mount /dev/myvg/mylv /data

    7.Grow the logical volume to 35 GB
    [root@localhost ~]# lvextend -L +5G /dev/myvg/mylv
    Size of logical volume myvg/mylv changed from 30.00 GiB (7680 extents) to 35.00 GiB (8960 extents).
    Logical volume mylv successfully resized
    [root@localhost ~]# lvdisplay /dev/myvg/mylv
    --- Logical volume ---
    LV Path /dev/myvg/mylv
    LV Name mylv
    VG Name myvg
    LV UUID 4sEUaj-zVaG-yTAz-VKbc-TJZ8-PseK-wiiISk
    LV Write Access read/write
    LV Creation host, time localhost.localdomain, 2019-08-01 17:01:31 +0800
    LV Status available
    # open 1
    LV Size 35.00 GiB
    Current LE 8960
    Segments 2
    Allocation inherit
    Read ahead sectors auto
    - currently set to 8192
    Block device 253:3

    [root@localhost ~]# df -hT
    Filesystem Type Size Used Avail Use% Mounted on
    /dev/mapper/centos-root xfs 38G 8.7G 29G 24% /
    devtmpfs devtmpfs 985M 0 985M 0% /dev
    tmpfs tmpfs 994M 80K 994M 1% /dev/shm
    tmpfs tmpfs 994M 8.9M 985M 1% /run
    tmpfs tmpfs 994M 0 994M 0% /sys/fs/cgroup
    /dev/sdb1 ext4 4.8G 20M 4.6G 1% /data1
    /dev/sdb2 xfs 5.0G 33M 5.0G 1% /data2
    /dev/sdb3 vfat 5.0G 4.0K 5.0G 1% /data3
    /dev/sda1 xfs 497M 107M 391M 22% /boot
    /dev/mapper/myvg-mylv xfs 30G 33M 30G 1% /data
    [root@localhost ~]# xfs_growfs /dev/myvg/mylv
    meta-data=/dev/mapper/myvg-mylv isize=256 agcount=4, agsize=1966080 blks
    = sectsz=512 attr=2, projid32bit=1
    = crc=0 finobt=0
    data = bsize=4096 blocks=7864320, imaxpct=25
    = sunit=0 swidth=0 blks
    naming =version 2 bsize=4096 ascii-ci=0 ftype=0
    log =internal bsize=4096 blocks=3840, version=2
    = sectsz=512 sunit=0 blks, lazy-count=1
    realtime =none extsz=4096 blocks=0, rtextents=0
    data blocks changed from 7864320 to 9175040
    [root@localhost ~]# df -hT
    Filesystem Type Size Used Avail Use% Mounted on
    /dev/mapper/centos-root xfs 38G 8.7G 29G 24% /
    devtmpfs devtmpfs 985M 0 985M 0% /dev
    tmpfs tmpfs 994M 80K 994M 1% /dev/shm
    tmpfs tmpfs 994M 8.9M 985M 1% /run
    tmpfs tmpfs 994M 0 994M 0% /sys/fs/cgroup
    /dev/sdb1 ext4 4.8G 20M 4.6G 1% /data1
    /dev/sdb2 xfs 5.0G 33M 5.0G 1% /data2
    /dev/sdb3 vfat 5.0G 4.0K 5.0G 1% /data3
    /dev/sda1 xfs 497M 107M 391M 22% /boot
    /dev/mapper/myvg-mylv xfs 35G 33M 35G 1% /data
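The block counts xfs_growfs prints line up with the LV sizes, given the 4096-byte block size from the mkfs output:

```shell
# XFS data blocks before and after growing the filesystem (bsize=4096).
echo $((30 * 1024 * 1024 * 1024 / 4096))   # 7864320 blocks = 30 GiB
echo $((35 * 1024 * 1024 * 1024 / 4096))   # 9175040 blocks = 35 GiB
```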

    8.Edit /etc/fstab to mount the logical volume with the disk-quota options enabled
    [root@localhost ~]# vim /etc/fstab
    /dev/myvg/mylv /data xfs defaults,usrquota,grpquota 0 0
    Note that XFS only honors quota options given at mount time, so after editing /etc/fstab the filesystem must be unmounted and mounted again; a plain remount will not enable quotas.


    9.Set disk quotas: for user crushlinux in /data, a soft limit of 80 MB and a hard limit of 100 MB on disk space,
    and a soft limit of 80 files and a hard limit of 100 files.
    [root@localhost ~]# quotacheck -vug /data
    quotacheck: Skipping /dev/mapper/myvg-mylv [/data]
    quotacheck: Cannot find filesystem to check or filesystem not mounted with quota option.
    [root@localhost ~]# quotaon -uv /data
    quotaon: Enforcing user quota already on /dev/mapper/myvg-mylv
    (On XFS these two steps are unnecessary: the filesystem maintains its own quota accounting, which is why quotacheck skips it and quotaon reports the quota already enforced.)
    [root@localhost ~]# edquota -u crushlinux
    Disk quotas for user crushlinux (uid 2004):
    Filesystem blocks soft hard inodes soft hard
    /dev/mapper/myvg-mylv 12 81920 102400 7 80 100
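edquota expresses block limits in 1 KiB blocks, so the 80 MB/100 MB targets translate as:

```shell
# Soft and hard block limits in 1 KiB units for 80 MiB and 100 MiB.
echo $((80 * 1024))    # 81920  (soft)
echo $((100 * 1024))   # 102400 (hard)
```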


    10.Test in /data with the touch and dd commands
    [crushlinux@localhost data]$ touch b
    [crushlinux@localhost data]$ dd if=/dev/zero of=./b bs=1M count=100
    dd: error writing './b': Disk quota exceeded
    100+0 records in
    99+0 records out
    104837120 bytes (105 MB) copied, 0.172647 s, 607 MB/s


    11.View quota usage: from the user's perspective
    [crushlinux@localhost data]$ quota

    Disk quotas for user crushlinux (uid 2004):
    Filesystem blocks quota limit grace files quota limit grace
    /dev/mapper/myvg-mylv
    92176* 80000 1024000 none 13 80 100

    12.View quota usage: from the filesystem's perspective
    [root@localhost tom]# repquota -auvs
    *** Report for user quotas on device /dev/mapper/myvg-mylv
    Block grace time: 7days; Inode grace time: 7days
    Space limits File limits
    User used soft hard grace used soft hard grace
    ----------------------------------------------------------------------
    root -- 0K 0K 0K 6 0 0
    crushlinux -- 12K 81920K 100M 7 80 100

    *** Status for user quotas on device /dev/mapper/myvg-mylv
    Accounting: ON; Enforcement: ON
    Inode: #136 (2 blocks, 2 extents)

• Source: https://www.cnblogs.com/hfh1/p/11290845.html