• Ceph Block Storage


     

    Any ordinary Linux host can act as a Ceph client. The client talks to the Ceph storage cluster over the network to store and retrieve user data. Ceph RBD support has been in the mainline Linux kernel since version 2.6.34.

    =================== On the admin node 192.168.20.181 ================================

    su - ceph-admin

    cd my-cluster

    Create the Ceph block client user and its authentication key:

    ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd' |tee ./ceph.client.rbd.keyring
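
    To verify that the user was created with the intended capabilities, you can query the cluster; the output should look roughly like the commented sketch below (the key value will differ):

    ceph auth get client.rbd
    # [client.rbd]
    #     key = AQ...==
    #     caps mon = "allow r"
    #     caps osd = "allow class-read object_prefix rbd_children, allow rwx pool=rbd"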

     

    =================== On the client 192.168.20.184 ================================

    Edit the hosts file:

    cat /etc/hosts

    192.168.20.181 c720181

    192.168.20.182 c720182

    192.168.20.183 c720183

    192.168.20.184 c720184

    Create the directory for the keyring and configuration file:

    mkdir -p /etc/ceph

    =================== On the admin node 192.168.20.181 ================================

    Copy the keyring and the configuration file to the client:

    scp ceph.client.rbd.keyring /etc/ceph/ceph.conf root@192.168.20.184:/etc/ceph/

    =================== On the client 192.168.20.184 ================================

    Check that the client meets the block-device requirements:

    [root@c720184 ~]# uname -r

    3.10.0-957.el7.x86_64                 # any kernel newer than 2.6.34 will do

    [root@c720184 ~]# modprobe rbd

    [root@c720184 ~]# echo $?

    0                                            # exit status 0 means the rbd module loaded successfully

    Install the Ceph client:

    yum -y install ceph   (if ceph.repo and epel.repo are not configured yet, set them up first, otherwise the install will fail)
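
    If the repos are missing, a minimal ceph.repo for Luminous on CentOS 7 looks roughly like the sketch below (it points at the upstream download.ceph.com path; substitute a mirror if you use one). EPEL itself can usually be added with yum -y install epel-release.

    # /etc/yum.repos.d/ceph.repo -- minimal sketch for Luminous on el7
    [ceph]
    name=Ceph packages for x86_64
    baseurl=https://download.ceph.com/rpm-luminous/el7/x86_64/
    enabled=1
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc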

    Connect to the Ceph cluster (the client finds /etc/ceph/ceph.client.rbd.keyring automatically for --name client.rbd):

    ceph -s --name client.rbd 

     

    (1) Create a storage pool on the cluster

    By default, block devices are created in the pool named rbd. After an installation with ceph-deploy (Luminous release), however, that pool does not exist yet and has to be created first.

    # Create the pool

    =================== On the admin node 192.168.20.181 ================================

    [ceph-admin@c720181 my-cluster]$ ceph osd lspools   # list the cluster's storage pools

    [ceph-admin@c720181 my-cluster]$ ceph osd pool create rbd 128   # 128 is the placement group (PG) count; see the rule-of-thumb sketch below

    pool 'rbd' created

    [ceph-admin@c720181 my-cluster]$ ceph osd lspools

    1 rbd,
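
    The PG count of 128 follows the common rule of thumb: (OSD count * 100) / replica count, rounded up to a power of two. Assuming a small lab cluster with 3 OSDs and 3 replicas (an assumption about this setup), the arithmetic works out as follows:

    # Rule-of-thumb PG calculation (assumes 3 OSDs, 3 replicas)
    osds=3; replicas=3
    raw=$(( osds * 100 / replicas ))               # 100
    pg=1; while [ $pg -lt $raw ]; do pg=$(( pg * 2 )); done
    echo $pg                                       # 128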

    (2) Create a block device on the client

    [root@c720184 ~]# rbd create rbd1 --size 10240 --name client.rbd

    [root@c720184 ~]# rbd ls -p rbd --name client.rbd

    rbd1

    [root@c720184 ~]# rbd list --name client.rbd

    rbd1

    [root@c720184 ~]# rbd --image rbd1 info --name client.rbd

    rbd image 'rbd1':

    size 10GiB in 2560 objects

    order 22 (4MiB objects)

    block_name_prefix: rbd_data.106b6b8b4567

    format: 2

    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten

    flags:

    create_timestamp: Sun Aug 18 18:56:02 2019
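
    The info output is self-consistent: order 22 means each backing RADOS object is 2^22 bytes = 4 MiB, so a 10 GiB image is striped across 10 GiB / 4 MiB = 2560 objects, exactly as reported. A quick shell check:

    echo $(( 2**22 ))                 # 4194304 bytes = 4 MiB per object
    echo $(( 10 * 1024**3 / 2**22 ))  # 2560 objects for a 10 GiB image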

    (3) Map the block device on the client

    [root@c720184 ~]# rbd map --image rbd1 --name client.rbd

    A direct map fails with an error, because the image was created with features that this kernel version does not support:

    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten

     

    There are three ways to solve this:

    a. Disable the unsupported features dynamically (recommended):

    rbd feature disable rbd1 exclusive-lock object-map deep-flatten fast-diff --name client.rbd

    b. Enable only the layering feature when creating the RBD image:

    rbd create rbd1 --size 10240 --image-feature layering --name client.rbd

    c. Disable them in the Ceph configuration file:

    rbd_default_features = 1
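
    For option c, the setting goes into /etc/ceph/ceph.conf on the client and only affects images created afterwards; a minimal sketch:

    # /etc/ceph/ceph.conf on the client
    [client]
    # feature bit 1 = layering only; applies to newly created images
    rbd_default_features = 1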

    Here we use the dynamic approach (note that --name client.rbd must be included, otherwise the command fails):

            rbd feature disable rbd1 exclusive-lock object-map deep-flatten fast-diff --name client.rbd

    Remap the block device:

     [root@c720184 ~]# rbd map --image rbd1 --name client.rbd
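
    A successful map prints the device node (here /dev/rbd0). The mapping can be confirmed with rbd showmapped, whose output lists the pool, image, and device along these lines:

    rbd showmapped --name client.rbd
    # id pool image snap device
    # 0  rbd  rbd1  -    /dev/rbd0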

     

    (4) Create a filesystem and mount it

    [root@c720184 ~]# fdisk -l /dev/rbd0

    Disk /dev/rbd0: 10.7 GB, 10737418240 bytes, 20971520 sectors

    Units = sectors of 1 * 512 = 512 bytes

    Sector size (logical/physical): 512 bytes / 512 bytes

    I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes

    [root@c720184 ~]# mkfs.xfs /dev/rbd0

    meta-data=/dev/rbd0              isize=512    agcount=16, agsize=163840 blks

             =                       sectsz=512   attr=2, projid32bit=1

             =                       crc=1        finobt=0, sparse=0

    data     =                       bsize=4096   blocks=2621440, imaxpct=25

             =                       sunit=1024   swidth=1024 blks

    naming   =version 2              bsize=4096   ascii-ci=0 ftype=1

    log      =internal log           bsize=4096   blocks=2560, version=2

             =                       sectsz=512   sunit=8 blks, lazy-count=1

    realtime =none                   extsz=4096   blocks=0, rtextents=0

    [root@c720184 ~]# mkdir /mnt/ceph-disk1

    [root@c720184 ~]# mount /dev/rbd0 /mnt/ceph-disk1

    [root@c720184 ~]# df -h

    Filesystem               Size  Used Avail Use% Mounted on

    /dev/mapper/centos-root   17G  1.5G   16G   9% /

    devtmpfs                 908M     0  908M   0% /dev

    tmpfs                    920M     0  920M   0% /dev/shm

    tmpfs                    920M  8.5M  911M   1% /run

    tmpfs                    920M     0  920M   0% /sys/fs/cgroup

    /dev/sda1               1014M  145M  870M  15% /boot

    tmpfs                    184M     0  184M   0% /run/user/0

    /dev/rbd0                 10G   33M   10G   1% /mnt/ceph-disk1

    (5) Test writing data

    [root@c720184 ~]# dd if=/dev/zero of=/mnt/ceph-disk1/file1 count=100 bs=1M

    100+0 records in

    100+0 records out

    104857600 bytes (105 MB) copied, 1.04015 s, 101 MB/s

    [root@c720184 ~]# ll -h /mnt/ceph-disk1/

    total 100M

    -rw-r--r--. 1 root root 100M Aug 18 19:11 file1

    (6) Turn the mount into a service so it happens automatically at boot

    wget -O /usr/local/bin/rbd-mount https://raw.githubusercontent.com/aishangwei/ceph-demo/master/client/rbd-mount

    # Note: adjust the configuration variables in the script (pool, image, mount point) to match your setup

    #!/bin/bash

    # Pool name where block device image is stored
    export poolname=rbd

    # Disk image name
    export rbdimage=rbd1

    # Mounted Directory
    export mountpoint=/mnt/ceph-disk1

    # Image mount/unmount and pool are passed from the systemd service as arguments
    # Are we are mounting or unmounting
    if [ "$1" == "m" ]; then
       modprobe rbd
       rbd feature disable $rbdimage object-map fast-diff deep-flatten
       rbd map $rbdimage --id rbd --keyring /etc/ceph/ceph.client.rbd.keyring
       mkdir -p $mountpoint
       mount /dev/rbd/$poolname/$rbdimage $mountpoint
    fi
    if [ "$1" == "u" ]; then
       umount $mountpoint
       rbd unmap /dev/rbd/$poolname/$rbdimage
    fi

    # Make the script executable

    chmod +x /usr/local/bin/rbd-mount
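
    The script takes a single argument: m to map and mount, u to unmount and unmap. It can be smoke-tested by hand before wiring it into systemd:

    /usr/local/bin/rbd-mount m    # map rbd1 and mount it at /mnt/ceph-disk1
    df -h /mnt/ceph-disk1
    /usr/local/bin/rbd-mount u    # unmount and unmap again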

    wget -O /etc/systemd/system/rbd-mount.service https://raw.githubusercontent.com/aishangwei/ceph-demo/master/client/rbd-mount.service
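
    The downloaded unit simply wraps the script above with ExecStart/ExecStop; a minimal sketch of such a unit (the actual file behind the URL may differ in details):

    # /etc/systemd/system/rbd-mount.service -- minimal sketch
    [Unit]
    Description=Map RBD image and mount filesystem
    After=network-online.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/local/bin/rbd-mount m
    ExecStop=/usr/local/bin/rbd-mount u

    [Install]
    WantedBy=multi-user.target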

    [root@c720184 ~]# systemctl daemon-reload

    [root@c720184 ~]# systemctl enable rbd-mount.service

    Test whether the filesystem is mounted automatically after a reboot:

    reboot -f

    df -h

    To save time we skip the actual reboot here; instead we unmount the filesystem and start the mount service to check that it gets mounted again.

    [root@c720184 ~]# umount /mnt/ceph-disk1/

    [root@c720184 ~]# systemctl start rbd-mount.service

     

     [root@client ~]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/cl_-root   50G  1.8G   49G   4% /
    devtmpfs              910M     0  910M   0% /dev
    tmpfs                 920M     0  920M   0% /dev/shm
    tmpfs                 920M   17M  904M   2% /run
    tmpfs                 920M     0  920M   0% /sys/fs/cgroup
    /dev/sda1            1014M  184M  831M  19% /boot
    /dev/mapper/cl_-home  196G   33M  195G   1% /home
    tmpfs                 184M     0  184M   0% /run/user/0
    ceph-fuse              54G  1.0G   53G   2% /mnt/cephfs
    c720182:/              54G  1.0G   53G   2% /mnt/cephnfs
    /dev/rbd1              20G   33M   20G   1% /mnt/ceph-disk1
