• Ceph Nautilus cluster deployment


    Environment preparation

    Note: this is a minimal installation, intended mainly to walk through the deployment workflow; more in-depth and complex topics will be covered in later updates.

    The ceph-deploy tool is used to quickly bootstrap the cluster.

    OS          Kernel  ceph-deploy  Ceph version  Hostname    IP            Role
    CentOS 7.5  3.10    2.0.1        ceph-14.2.8   monitor01   172.16.1.21   monitor, ceph-deploy
    CentOS 7.5  3.10    2.0.1        ceph-14.2.8   osd01       172.16.1.24   osd
    CentOS 7.5  3.10    2.0.1        ceph-14.2.8   osd02       172.16.1.25   osd
    CentOS 7.5  3.10    2.0.1        ceph-14.2.8   osd03       172.16.1.26   osd

    Each of the three OSD nodes has four disks attached. This environment runs on virtual machines, so creating four virtual disks per node is enough.

    [root@osd01 ~]# ll /dev/sdb /dev/sdc /dev/sdd /dev/sde
    brw-rw---- 1 root disk 8, 16 Mar 25 13:09 /dev/sdb
    brw-rw---- 1 root disk 8, 32 Mar 25 13:09 /dev/sdc
    brw-rw---- 1 root disk 8, 48 Mar 25 13:09 /dev/sdd
    brw-rw---- 1 root disk 8, 64 Mar 25 13:09 /dev/sde
    

    Ceph clusters are particularly sensitive to clock drift, so a time synchronization tool is required; chrony is used here and must be installed on all nodes.

    yum install chrony -y
    
    grep aliyun /etc/chrony.conf 
    # server time4.aliyun.com iburst    <- the line added to the config file
    
    systemctl start chronyd
    systemctl enable chronyd
    
    chronyc sources
    # 210 Number of sources = 1
    # MS Name/IP address         Stratum Poll Reach LastRx Last sample               
    # ===============================================================================
    # ^* 203.107.6.88                  2  10   377   965  +2359us[+2593us] +/-   15ms
    
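    The grep above only verifies the line; a minimal sketch of the actual edit, assuming the stock CentOS 7 chrony.conf with its default "server ... iburst" pool entries and Aliyun's public NTP service as the time source:

    # comment out the default pool servers and point chrony at time4.aliyun.com
    # (taken from the comment above); substitute your own NTP source if the
    # nodes have no Internet access
    sed -i 's/^server /#server /' /etc/chrony.conf
    echo "server time4.aliyun.com iburst" >> /etc/chrony.conf
    systemctl restart chronyd
    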

    yum repository configuration

    Configure a domestic (China-based) mirror on all machines, as follows:

    [root@monitor01 ~]# cat /etc/yum.repos.d/ceph.repo 
    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/noarch/
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
    
    [ceph]
    name=Ceph nautilus packages
    baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/x86_64/
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
    
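    After writing the repo file, refresh the yum metadata so the following installs pull from the new mirror (ordinary yum housekeeping, not a Ceph-specific step):

    yum clean all
    yum makecache
    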

    Configure hosts resolution on all nodes

    Add the following entries to the /etc/hosts file:

    172.16.1.21 monitor01
    172.16.1.24 osd01
    172.16.1.25 osd02
    172.16.1.26 osd03
    
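    A quick sanity check from the deploy node that every hostname resolves and answers; the host list is taken from the table above:

    for host in monitor01 osd01 osd02 osd03; do
        ping -c 1 -W 1 $host
    done
    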

    Configure passwordless SSH login

    On the ceph-deploy node, generate the SSH key used to log in to all nodes (note that -t rsa produces an RSA key even though the output file here is named id_dsa):

    [root@monitor01 ~]# ssh-keygen -t rsa -f ~/.ssh/id_dsa -P ""
    Generating public/private rsa key pair.
    Created directory '/root/.ssh'.
    Your identification has been saved in /root/.ssh/id_dsa.
    Your public key has been saved in /root/.ssh/id_dsa.pub.
    The key fingerprint is:
    SHA256:6GBpOHlStUqsj46exNWMgdjHM80Jp+6ojeFFS4rVRuU root@monitor01
    The key's randomart image is:
    +---[RSA 2048]----+
    |    . +          |
    |...o X o         |
    |....@ E          |
    |   @== .         |
    |  B=@o. S        |
    |o.=%.o           |
    |o+ooo .          |
    |oBo              |
    |=+o              |
    +----[SHA256]-----+
    

    Then distribute the public key to every node with the ssh-copy-id command.
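
    A minimal distribution sketch, assuming root password login is still possible on all four hosts and using the key file path from the ssh-keygen command above:

    for host in monitor01 osd01 osd02 osd03; do
        ssh-copy-id -i ~/.ssh/id_dsa.pub root@$host
    done
    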

    Install the Ceph cluster

    Install the ceph-deploy tool

    Install it on the ceph-deploy admin node (see the table above if in doubt about which node that is):

    yum install ceph-deploy -y
    
    ceph-deploy --version
    
    # 2.0.1
    

    Install ceph on all nodes

    Ceph must be installed on all monitor and OSD nodes.

    yum install ceph -y
    
    ceph -v
    
    # ceph version 14.2.8 (2d095e947a02261ce61424021bb43bd3022d35cb) nautilus (stable)
    

    Deploy the monitor node

    Create a new monitor; this step mainly generates the configuration files:

    [root@monitor01 ~]# cd /etc/ceph/
    [root@monitor01 ceph]# ceph-deploy new monitor01
    [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
    [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new monitor01
    [ceph_deploy.cli][INFO  ] ceph-deploy options:
    [ceph_deploy.cli][INFO  ]  username                      : None
    [ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f612edacd70>
    [ceph_deploy.cli][INFO  ]  verbose                       : False
    [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
    [ceph_deploy.cli][INFO  ]  quiet                         : False
    [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f612e734170>
    [ceph_deploy.cli][INFO  ]  cluster                       : ceph
    [ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
    [ceph_deploy.cli][INFO  ]  mon                           : ['monitor01']
    [ceph_deploy.cli][INFO  ]  public_network                : None
    [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
    [ceph_deploy.cli][INFO  ]  cluster_network               : None
    [ceph_deploy.cli][INFO  ]  default_release               : False
    [ceph_deploy.cli][INFO  ]  fsid                          : None
    [ceph_deploy.new][DEBUG ] Creating new cluster named ceph
    [ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
    [monitor01][DEBUG ] connected to host: monitor01 
    [monitor01][DEBUG ] detect platform information from remote host
    [monitor01][DEBUG ] detect machine type
    [monitor01][DEBUG ] find the location of an executable
    [monitor01][INFO  ] Running command: /usr/sbin/ip link show
    [monitor01][INFO  ] Running command: /usr/sbin/ip addr show
    [monitor01][DEBUG ] IP addresses found: [u'172.16.1.21', u'172.16.0.21']
    [ceph_deploy.new][DEBUG ] Resolving host monitor01
    [ceph_deploy.new][DEBUG ] Monitor monitor01 at 172.16.1.21
    [ceph_deploy.new][DEBUG ] Monitor initial members are ['monitor01']
    [ceph_deploy.new][DEBUG ] Monitor addrs are ['172.16.1.21']
    [ceph_deploy.new][DEBUG ] Creating a random mon key...
    [ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
    [ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
    
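    The last log line writes the initial ceph.conf into the current directory. A rough sketch of what it should contain (the fsid will differ per cluster; the public_network line is an assumed, optional addition that is worth setting explicitly here because the IP detection above found two addresses, 172.16.1.21 and 172.16.0.21):

    [global]
    fsid = d5f26108-e40a-49e6-a237-505f8a3004b9
    mon_initial_members = monitor01
    mon_host = 172.16.1.21
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    # assumed optional addition for multi-homed hosts:
    public_network = 172.16.1.0/24
    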

    Initialize the monitor node; this step mainly distributes the configuration files to /etc/ceph on the monitor node:

    [root@monitor01 ceph]# ceph-deploy mon create-initial
    [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
    [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial
    ... ...
    ... ...
    [ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
    [ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpAR_kCD
    

    Check the monitor process on the monitor node:

    [root@monitor01 ceph]# ps aux|grep mon
    dbus         606  0.0  0.2  58064  2316 ?        Ss   Mar25   0:01 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
    ceph        7039  0.2  3.6 503636 36056 ?        Ssl  19:57   0:00 /usr/bin/ceph-mon -f --cluster ceph --id monitor01 --setuser ceph --setgroup ceph
    

    Check the Ceph status

    [root@monitor01 ceph]# ceph status
      cluster:
        id:     d5f26108-e40a-49e6-a237-505f8a3004b9
        health: HEALTH_OK       # the cluster status is OK
     
      services:
        mon: 1 daemons, quorum monitor01 (age 88s)  # one monitor node is visible
        mgr: no daemons active      # mgr is not configured yet; it backs the dashboard
        osd: 0 osds: 0 up, 0 in     # no OSDs yet
     
      data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0 B
        usage:   0 B used, 0 B / 0 B avail
        pgs:
    

    Deploy the OSD nodes

    Next, initialize the OSD nodes; run the following on the ceph-deploy node.

    Note: the disks attached to the OSD nodes do not need to be formatted or partitioned; if a disk was previously used, see the zap sketch after the commands below.

    ceph-deploy osd create osd01 --data /dev/sdb
    ceph-deploy osd create osd01 --data /dev/sdc
    ceph-deploy osd create osd01 --data /dev/sdd
    ceph-deploy osd create osd01 --data /dev/sde
    
    ceph-deploy osd create osd02 --data /dev/sdb
    ceph-deploy osd create osd02 --data /dev/sdc
    ceph-deploy osd create osd02 --data /dev/sdd
    ceph-deploy osd create osd02 --data /dev/sde
    
    ceph-deploy osd create osd03 --data /dev/sdb
    ceph-deploy osd create osd03 --data /dev/sdc
    ceph-deploy osd create osd03 --data /dev/sdd
    ceph-deploy osd create osd03 --data /dev/sde
    
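    If any of these disks still carry partitions or a filesystem from previous use, ceph-deploy can wipe them first. This is destructive, so only run it against the data disks listed above:

    ceph-deploy disk zap osd01 /dev/sdb
    # repeat for every host/disk that needs wiping
    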

    Check the OSD daemon processes started on the osd01 node:

    [root@osd01 ~]# ps aux|grep osd
    ceph        6200  0.1  3.5 873348 35624 ?        Ssl  20:03   0:00 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
    ceph        6669  0.3  4.1 873340 41492 ?        Ssl  20:04   0:00 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
    ceph        7136  0.3  3.6 873336 36756 ?        Ssl  20:04   0:00 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
    ceph        7603  0.4  3.8 874360 38444 ?        Ssl  20:05   0:00 /usr/bin/ceph-osd -f --cluster ceph --id 3 --setuser ceph --setgroup ceph
    

    As expected, one OSD daemon process was started for each of the four disks.

    Install ceph-mgr:

    Without a mgr, ceph status will report no active mgr.

    The mgr is installed on the monitor01 node as well.

    ceph-deploy mgr create monitor01
    
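    As noted in the earlier status output, the mgr is what backs the dashboard. An optional, hedged sketch of enabling it on Nautilus (the ceph-mgr-dashboard package and the module/commands below are the standard Nautilus ones; this step is not part of the original walkthrough):

    yum install ceph-mgr-dashboard -y       # on the mgr node (monitor01)
    ceph mgr module enable dashboard
    ceph dashboard create-self-signed-cert
    ceph mgr services                       # prints the dashboard URL
    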

    Verify

    Check the Ceph cluster status again:

    [root@monitor01 ceph]# ceph -s
      cluster:
        id:     d5f26108-e40a-49e6-a237-505f8a3004b9
        health: HEALTH_OK           # cluster status OK
     
      services:
        mon: 1 daemons, quorum monitor01 (age 12m)
        mgr: monitor01(active, since 55s)                       # the freshly installed mgr is now active
        osd: 12 osds: 12 up (since 3m), 12 in (since 3m)        # 12 OSDs, all up and in
     
      data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0 B
        usage:   12 GiB used, 108 GiB / 120 GiB avail           # total capacity across all OSDs
        pgs:
    
    [root@monitor01 ceph]# ceph osd tree
    ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF 
    -1       0.11755 root default                           
    -3       0.03918     host osd01                         
     0   hdd 0.00980         osd.0      up  1.00000 1.00000 
     1   hdd 0.00980         osd.1      up  1.00000 1.00000 
     2   hdd 0.00980         osd.2      up  1.00000 1.00000 
     3   hdd 0.00980         osd.3      up  1.00000 1.00000 
    -5       0.03918     host osd02                         
     4   hdd 0.00980         osd.4      up  1.00000 1.00000 
     5   hdd 0.00980         osd.5      up  1.00000 1.00000 
     6   hdd 0.00980         osd.6      up  1.00000 1.00000 
     7   hdd 0.00980         osd.7      up  1.00000 1.00000 
    -7       0.03918     host osd03                         
     8   hdd 0.00980         osd.8      up  1.00000 1.00000 
     9   hdd 0.00980         osd.9      up  1.00000 1.00000 
    10   hdd 0.00980         osd.10     up  1.00000 1.00000 
    11   hdd 0.00980         osd.11     up  1.00000 1.00000 
    

    At this point the Ceph base layer is fully deployed, and the interface components for the three storage types (block, object, and file) can be installed on top of it.

    CephFS is used as the test case below.

    Install CephFS

    All of the following operations are performed on the ceph-deploy node.

    Metadata server (MDS)

    ceph-deploy mds create monitor01
    

    Create two storage pools

    A CephFS filesystem requires at least two RADOS pools: one for data and one for metadata.

    When configuring these two pools, note:

    1. Set a higher replication level for the metadata pool, because damage to the metadata can render the whole filesystem unusable (see the sketch after the commands below).
    2. Put the metadata pool on low-latency storage such as SSD, because metadata latency directly affects client responsiveness.

    ceph osd pool create cephfs_data 128
    # pool 'cephfs_data' created
    
    ceph osd pool create cephfs_metadata 128
    # pool 'cephfs_metadata' created
    
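    To illustrate item 1 above: per-pool replication is controlled by the size attribute (the Nautilus default is 3), which can be inspected and changed, for example:

    # check the current replica count of the metadata pool
    ceph osd pool get cephfs_metadata size
    # raise it if it was lowered; with only three OSD hosts, 3 is the practical ceiling
    # ceph osd pool set cephfs_metadata size 3
    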

    Enable the filesystem

    Enable the filesystem with the fs new command:

    ceph fs new cephfs cephfs_metadata cephfs_data
    # new fs with metadata pool 2 and data pool 1
    

    Check the status:

    ceph fs ls
    # name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
    
    ceph mds stat
    # cephfs:1 {0=monitor01=up:active}
    

    The installation is now complete.

    Mount CephFS for testing

    Pick a machine for testing; here the osd01 node is used for the test mount.

    First, look up the user and key used to access Ceph:

    cat /etc/ceph/ceph.client.admin.keyring
    # [client.admin]
    # 	key = AQCyO39eTv7ZGxAAGCZaxyfy5VdxcZN9HWYcXw==
    # 	caps mds = "allow *"
    # 	caps mgr = "allow *"
    # 	caps mon = "allow *"
    # 	caps osd = "allow *"
    

    From the output above:

    User: admin
    Key:  AQCyO39eTv7ZGxAAGCZaxyfy5VdxcZN9HWYcXw==

    Mount:

    [root@osd01 ~]# mount -t ceph monitor01:6789:/ /mnt -o name=admin,secret=AQCyO39eTv7ZGxAAGCZaxyfy5VdxcZN9HWYcXw==
    [root@osd01 ~]# df -h /mnt
    Filesystem          Size  Used Avail Use% Mounted on
    172.16.1.21:6789:/   34G     0   34G   0% /mnt
    
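    A safer variant keeps the key out of the command line and shell history by using the secretfile option of mount.ceph; the file path below is only an example:

    echo "AQCyO39eTv7ZGxAAGCZaxyfy5VdxcZN9HWYcXw==" > /etc/ceph/admin.secret
    chmod 600 /etc/ceph/admin.secret
    mount -t ceph monitor01:6789:/ /mnt -o name=admin,secretfile=/etc/ceph/admin.secret
    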

    Problems and reference links

    Problem 1:

    The following errors appear in /var/log/messages:
    Mar 24 20:20:58 monitor01 kernel: libceph: mon0 172.16.1.21:6789 feature set mismatch, my 107b84a842aca < server's 40107b84a842aca, missing 400000000000000
    Mar 24 20:20:58 monitor01 kernel: libceph: mon0 172.16.1.21:6789 missing required protocol features
    
    Fix:
    
    ceph osd crush tunables hammer
    ceph osd crush reweight-all
    
    Reference article:
    https://www.jianshu.com/p/e6aa8d4236a8
    

    Problem 2:

    When creating additional pools for a second CephFS filesystem, the following error is reported:
    Error EINVAL: Creation of multiple filesystems is disabled.  To enable this experimental feature, use 'ceph fs flag set enable_multiple true'
    
    Fix:
    
    Run the command:
    ceph fs flag set enable_multiple true
    
    This resolves the issue.
    

    https://www.cnblogs.com/sisimi/p/7837403.html
