• Ceph Deployment Manual


    Deploying Luminous Ceph 12.2.0 on CentOS 7.2

    This guide covers installing and deploying the Luminous release of Ceph (12.2.0) on CentOS 7.2. Because Luminous uses BlueStore as the default backend storage type and introduces the new mgr (manager) daemon, ceph-deploy 1.5.38 is used to deploy the cluster and create the MONs, OSDs, MGR, and so on.

    Environment

    Each host:

    • CentOS Linux release 7.2.1511 (Core), minimal install
    • Two 100 GB disks for the OSDs

    [root@localhost ~]# cat /etc/redhat-release

    CentOS Linux release 7.2.1511 (Core)

    [root@localhost ~]# lsblk

    NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT

    sr0              11:0    1 1024M  0 rom

    xvda            202:0    0   10G  0 disk

    ├─xvda1         202:1    0  500M  0 part /boot

    └─xvda2         202:2    0  9.5G  0 part

      ├─centos-root 253:0    0  8.5G  0 lvm  /

      └─centos-swap 253:1    0    1G  0 lvm  [SWAP]

    xvdb            202:16   0  100G  0 disk

    xvdc            202:32   0  100G  0 disk

    Host node232 serves as the admin node and runs ceph-deploy. The three hosts are configured as follows:

    Host       IP                 Installed components
    node232    192.168.217.232    ceph-deploy, mon, osd, mgr, ntp
    node233    192.168.217.233    mon, osd, ntpdate
    node234    192.168.217.234    mon, osd, ntpdate
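    The table lists ntp on node232 and ntpdate on the other two nodes; monitors need closely synchronized clocks. A minimal time-sync sketch (the exact scheme is an assumption; node232 acts as the local NTP server):

    # On node232: run an NTP server
    yum install ntp -y
    systemctl enable ntpd && systemctl start ntpd

    # On node233/node234: sync against node232 (repeat via cron if desired)
    yum install ntpdate -y
    ntpdate node232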

    Set up passwordless SSH login
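    ceph-deploy drives the other nodes over SSH, so the admin node needs passwordless root SSH to every host. A minimal sketch (the /etc/hosts entries and direct root login are assumptions about this setup):

    # On every node: make the short hostnames resolvable
    echo '192.168.217.232 node232' >> /etc/hosts
    echo '192.168.217.233 node233' >> /etc/hosts
    echo '192.168.217.234 node234' >> /etc/hosts

    # On node232: generate a key pair and push it to all three nodes
    ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
    for h in node232 node233 node234; do ssh-copy-id root@$h; done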

    Yum repository configuration

    Download the Aliyun base repo:

    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

    Download the Aliyun EPEL repo:

    wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

    Change the system version in the repo files to 7.3.1611, because the yum repositories for the CentOS 7.2.1511 release used here have already been emptied:

    [root@localhost ~]# sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo

    [root@localhost ~]# sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo

    #[root@localhost ~]# sed -i 's/$releasever/7.3.1611/g' /etc/yum.repos.d/CentOS-Base.repo

    Configure a Ceph Luminous repo, for example in /etc/yum.repos.d/ceph.repo:

    [ceph]
    name=ceph
    baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/x86_64/
    gpgcheck=0
    [ceph-noarch]
    name=cephnoarch
    baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/noarch/
    gpgcheck=0

    yum makecache

    The official Ceph yum repository is at http://download.ceph.com/.

    Install Ceph

    Download the Ceph RPMs to a local directory:


    [root@node232 ~]# yum install --downloadonly --downloaddir=/tmp/ceph ceph

    Install Ceph on each host:


    [root@node232 ~]# yum localinstall -C -y --disablerepo=* /tmp/ceph/*.rpm

    After installation succeeds, check the Ceph version:


    [root@node232 ~]# ceph -v

    ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)

    Deploy Ceph

    Run the following steps on the admin node, node232.

    Install ceph-deploy

    Download and install ceph-deploy 1.5.38:

    [root@node232 ~]# yum install --downloadonly --downloaddir=/tmp/ceph-deploy/ ceph-deploy

    [root@node232 ~]# yum localinstall -C -y --disablerepo=* /tmp/ceph-deploy/*.rpm

    After installation succeeds, check the ceph-deploy version:


    [root@node232 ~]# ceph-deploy --version

    1.5.38

    Deploy the cluster

    Create a deployment directory, then initialize the cluster:

    [root@node232 ~]# mkdir ceph-cluster

    [root@node232 ~]# cd ceph-cluster

    [root@node232 ceph-cluster]# ceph-deploy new node232 node233 node234

    This registers the three hosts as the initial monitor nodes in the cluster configuration.
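    ceph-deploy new also writes ceph.conf, ceph.mon.keyring, and a log file into the working directory. The generated ceph.conf looks roughly like this (a sketch; the fsid shown is the one from the ceph -s output later in this guide):

    [global]
    fsid = 988e29ea-8b2c-4fa7-a808-e199f2e6a334
    mon_initial_members = node232, node233, node234
    mon_host = 192.168.217.232,192.168.217.233,192.168.217.234
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx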

    Deploy the MONs


    [root@node232 ceph-cluster]# ceph-deploy mon create-initial

    This deploys and initializes the monitor nodes and gathers their keys.

    At this point, running ceph -s fails because the /etc/ceph/ceph.client.admin.keyring file is missing:


    [root@node232 ceph-cluster]# ceph -s

    2017-09-13 12:12:18.772214 7f3d3fc3f700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory

    2017-09-13 12:12:18.772260 7f3d3fc3f700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication

    2017-09-13 12:12:18.772263 7f3d3fc3f700  0 librados: client.admin initialization error (2) No such file or directory

    [errno 2] error connecting to the cluster

    Manually copy ceph.client.admin.keyring from the ceph-cluster directory to /etc/ceph/, or run ceph-deploy admin to distribute it automatically:

    [root@node232 ceph-cluster]# ceph-deploy admin node232 node233 node234

    Check the cluster:


    [root@node232 ceph-cluster]# ceph -s

      cluster:

        id:     988e29ea-8b2c-4fa7-a808-e199f2e6a334

        health: HEALTH_OK

      services:

        mon: 3 daemons, quorum node232,node233,node234

        mgr: no daemons active

        osd: 0 osds: 0 up, 0 in

      data:

        pools:   0 pools, 0 pgs

        objects: 0 objects, 0 bytes

        usage:   0 kB used, 0 kB / 0 kB avail

        pgs:

    Create the OSDs (the --zap-disk flag wipes the disks first):


    [root@node232 ceph-cluster]# ceph-deploy --overwrite-conf osd prepare node232:/dev/xvdb node232:/dev/xvdc node233:/dev/xvdb node233:/dev/xvdc node234:/dev/xvdb node234:/dev/xvdc --zap-disk

    Activate the OSDs:

    [root@node232 ceph-cluster]# ceph-deploy --overwrite-conf osd activate node232:/dev/xvdb1 node232:/dev/xvdc1 node233:/dev/xvdb1 node233:/dev/xvdc1 node234:/dev/xvdb1 node234:/dev/xvdc1
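    Once activation succeeds, you can check how the six OSDs map onto the three hosts (output will reflect your own cluster):

    [root@node232 ceph-cluster]# ceph osd tree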

    Check the cluster again (health stays HEALTH_WARN until an mgr is active):


    [root@node232 ceph-cluster]# ceph -s

      cluster:

        id:     988e29ea-8b2c-4fa7-a808-e199f2e6a334

        health: HEALTH_WARN

                no active mgr

      services:

        mon: 3 daemons, quorum node232,node233,node234

        mgr: no daemons active

        osd: 6 osds: 6 up, 6 in

      data:

        pools:   0 pools, 0 pgs

        objects: 0 objects, 0 bytes

        usage:   0 kB used, 0 kB / 0 kB avail

        pgs:

    Configure the MGR

    Create an mgr daemon named foo on node232:


    [root@node232 ceph-cluster]# ceph-deploy mgr create node232:foo

    Enable the dashboard


    [root@node232 ceph-cluster]# ceph mgr module enable dashboard

    Access the dashboard at http://192.168.217.232:7000.

    The dashboard listens on port 7000 by default; run ceph config-key set mgr/dashboard/server_port $PORT to change it.
    You can also run ceph config-key set mgr/dashboard/server_addr $IP to set the IP address the dashboard binds to.
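    For example, a sketch that binds the dashboard to node232 on port 8080 (the port choice here is arbitrary; the module is re-enabled so the new settings take effect):

    ceph config-key set mgr/dashboard/server_addr 192.168.217.232
    ceph config-key set mgr/dashboard/server_port 8080
    ceph mgr module disable dashboard
    ceph mgr module enable dashboard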

    Ceph MDS deployment

    Create an MDS daemon on each node, then create the data and metadata pools and a filesystem on top of them (the node1/node2/node3 hostnames below come from a different environment; substitute your own node names):

    ceph-deploy --overwrite-conf mds create node1

    ceph-deploy --overwrite-conf mds create node2

    ceph-deploy --overwrite-conf mds create node3

    ceph osd pool create cephfs_data 128

    ceph osd pool create cephfs_metadata 128

    ceph fs new cephfs cephfs_metadata cephfs_data
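    To confirm that the filesystem exists and an MDS has gone active, the standard status commands are:

    ceph fs ls
    ceph mds stat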

    mkdir /mnt/mycephfs  

    cat ceph.client.admin.keyring
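    The mount command below reads the secret from /root/admin.secret, which must contain only the key value from the admin keyring shown above. A minimal way to produce it (assuming the standard keyring format):

    grep 'key = ' ceph.client.admin.keyring | awk '{print $3}' > /root/admin.secret
    chmod 600 /root/admin.secret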

    mount -t ceph node1:6789:/ /mnt/mycephfs -o name=admin,secretfile=/root/admin.secret
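    Verify the mount:

    df -h /mnt/mycephfs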

    Ceph RBD deployment

    Create and initialize an rbd pool, then create an image and map it to the local host:

    ceph osd pool create rbd 8

    rbd pool init rbd

    rbd create foo --size 4096 -m node1 -k /etc/ceph/ceph.client.admin.keyring

    rbd map foo --name client.admin

    On CentOS 7's 3.10 kernel this first map fails because the kernel RBD client does not support several image features that Luminous enables by default, so disable them and map again:

    rbd feature disable foo exclusive-lock object-map fast-diff deep-flatten

    rbd map foo --name client.admin

    ll -h /dev/rbd0
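    Once mapped, /dev/rbd0 behaves like any block device. A sketch of formatting and mounting it (the XFS choice and mount point are assumptions):

    mkfs.xfs /dev/rbd0
    mkdir -p /mnt/myrbd
    mount /dev/rbd0 /mnt/myrbd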

    Ceph object storage (RGW) setup

    Install and create the RADOS gateway on node1, then check that it answers on its default port, 7480:

    ceph-deploy install --rgw node1

    ceph-deploy rgw create node1

    curl http://node1:7480
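    With no credentials, the gateway should answer with an anonymous bucket listing, roughly:

    <?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>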

    systemctl restart ceph-radosgw.target

    Create an S3 user; the generated access_key and secret_key appear in the command output and are used in the test script below:

    radosgw-admin user create --uid="rgwuser" --display-name="This is first rgw test user"

    Install the Python boto library and test S3 access with a short script:

    yum install python-boto -y

    [root@node1 ceph-cluster]# cat s3.py

    import boto
    import boto.s3.connection

    # Credentials from the radosgw-admin user create output above
    access_key = 'L25H46XKZFY63W7B45VW'
    secret_key = 'BKil9AGVU7g0e7wUyfsmyqzqp95IWeliKQHUo41V'

    # Connect to the RGW S3 endpoint on node1
    conn = boto.connect_s3(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        host='node1', port=7480,
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    # Create a bucket, then list all buckets owned by this user
    bucket = conn.create_bucket('my-second-s3-bucket')
    for bucket in conn.get_all_buckets():
        print "{name} {created}".format(
            name=bucket.name,
            created=bucket.creation_date,
        )
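    Run the script; it should print each bucket's name and creation date, including the my-second-s3-bucket it just created:

    [root@node1 ceph-cluster]# python s3.py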

    For Swift access, create a subuser; its swift secret key appears in the command output and is used with the swift client below:

    radosgw-admin subuser create --uid=rgwuser --subuser=rgwuser:swift --access=full

    Install the Swift command-line client and list the buckets:

    yum install python-setuptools -y

    easy_install pip

    pip install --upgrade setuptools

    pip install --upgrade python-swiftclient

    swift -A http://node1:7480/auth/1.0 -U rgwuser:swift -K '3beaKGWMQlqo1vuzlK3EwHQ1Ve3qNAwMrLdcrgTA' list

    If the S3 test above succeeded, the listing should include my-second-s3-bucket.
