• Deploying a Ceph Cluster with ceph-deploy


    I. Introduction to Ceph

    Ceph is a unified distributed storage system designed to provide good performance, reliability, and scalability.

    The Ceph project began as Sage Weil's doctoral research (the earliest results were published in 2004) and was later contributed to the open-source community. After years of development it is now supported by many cloud vendors and widely deployed; both Red Hat and OpenStack can integrate with Ceph as backend storage for virtual machine images.

    II. Ceph Features

    1. High performance

    a. Instead of a centralized metadata lookup scheme, Ceph uses the CRUSH algorithm to place data, giving balanced data distribution and high parallelism.
    b. Failure domains are taken into account: replica placement rules can be defined per workload, e.g. across machine rooms or with rack awareness.
    c. Scales to thousands of storage nodes and data volumes from terabytes to petabytes.

    2. High availability

    a. The number of replicas is flexibly configurable.

    b. Supports failure-domain separation and strong data consistency.

    c. Recovers and self-heals automatically in a variety of failure scenarios.

    d. No single point of failure; largely self-managing.

    3. High scalability

    a. Decentralized architecture.

    b. Flexible expansion.

    c. Capacity and throughput grow roughly linearly as nodes are added.

    4. Rich feature set

    a. Three storage interfaces: block storage, file storage, and object storage.

    b. Supports custom interfaces, with drivers in multiple languages.

    III. The Three Storage Interfaces

    Object: native API, also compatible with the Swift and S3 APIs.
    Block: supports thin provisioning, snapshots, and cloning.
    File: POSIX interface, with snapshot support.

    IV. Core Components

    1. Monitor: a Ceph cluster runs a small quorum of Monitors that synchronize state via Paxos and store cluster metadata such as the OSD map.

    2. OSD: short for Object Storage Device, the daemon that answers client requests and serves the actual data. A Ceph cluster typically runs many OSDs.

    3. MDS: short for Ceph Metadata Server, the metadata service that CephFS depends on.

    4. Object: Ceph's lowest-level storage unit; each object contains metadata and the raw data.

    5. PG: short for Placement Group, a logical concept; one PG maps onto multiple OSDs. The PG layer exists to distribute and locate data more efficiently.

    6. RADOS: short for Reliable Autonomic Distributed Object Store, the heart of a Ceph cluster, implementing data placement, failover, and other cluster operations.

    7. Librados: the client library for RADOS. Since RADOS is a protocol that is hard to speak directly, the upper layers RBD, RGW, and CephFS all access it through librados; bindings currently exist for PHP, Ruby, Java, Python, C, and C++.

    8. CRUSH: the data-distribution algorithm Ceph uses, similar in spirit to consistent hashing, which places data in predictable locations.
    9. RBD: short for RADOS Block Device, Ceph's block-storage service.
    10. RGW: short for RADOS Gateway, Ceph's object-storage service, with an interface compatible with S3 and Swift.
    11. CephFS: short for Ceph File System, Ceph's file-system service.
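    As a toy illustration of the PG layer described above (this sketch uses a simple CRC via `cksum`; it is NOT Ceph's real algorithm, which hashes with rjenkins and then maps the PG onto OSDs via CRUSH), an object name can be hashed and reduced modulo the pool's PG count to pick its placement group:

```shell
# Toy PG mapping sketch: hash the object name, take it modulo pg_num.
# Illustrative only -- Ceph's actual placement uses rjenkins + CRUSH.
pg_num=64
obj="rbd_data.1234"
hash=$(printf '%s' "$obj" | cksum | awk '{print $1}')
echo "object $obj -> pg $(( hash % pg_num ))"
```

    Because placement is computed, not looked up in a table, any client can locate data independently; only the PG-to-OSD mapping (small, one entry per PG) needs to be tracked by the Monitors, not one entry per object.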
    V. Deploying Ceph

    1) Base environment
    192.168.111.169 ceph-admin ceph-admin (ceph-deploy), mds1, mon1 (the monitor could also run on a separate machine)
    192.168.111.170 ceph-node1 ceph-node1 osd1
    192.168.111.172 ceph-node2 ceph-node2 osd2
    192.168.111.173 ceph-node3 ceph-node3 osd3

    Set the hostname on each node

    # hostnamectl set-hostname ceph-admin
    # hostnamectl set-hostname ceph-node1
    # hostnamectl set-hostname ceph-node2
    # hostnamectl set-hostname ceph-node3

    Add the hostname mappings on each node

    cat >>/etc/hosts<<EOF
    192.168.111.169 ceph-admin
    192.168.111.170 ceph-node1
    192.168.111.172 ceph-node2
    192.168.111.173 ceph-node3
    EOF
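    A quick sanity check (an illustrative helper, not part of the deployment) can confirm each cluster hostname resolves before moving on; the names match the /etc/hosts entries above:

```shell
# Print one resolution-status line per hostname, using getent so that
# /etc/hosts entries (not just DNS) are consulted.
check_hosts() {
  for h in "$@"; do
    if getent hosts "$h" >/dev/null; then
      echo "$h: resolves"
    else
      echo "$h: NOT resolvable yet"
    fi
  done
}
check_hosts ceph-admin ceph-node1 ceph-node2 ceph-node3
```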

    Verify connectivity from each node
    # ping -c 3 ceph-admin
    # ping -c 3 ceph-node1
    # ping -c 3 ceph-node2
    # ping -c 3 ceph-node3

    Disable the firewall and SELinux on each node

    # systemctl stop firewalld && systemctl disable firewalld && setenforce 0
    # sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # keep SELinux off after a reboot

    Install and configure NTP on each node (the official documentation recommends installing and configuring NTP on all cluster nodes; the system clocks must stay in sync. With no NTP server of our own, we sync against public NTP)

    # yum install ntp ntpdate ntp-doc -y
    # systemctl restart ntpd && systemctl enable ntpd
    # systemctl status ntpd

    Prepare the yum repos on each node
    Remove the default repos; the upstream mirrors are slow

    # yum clean all
    # mkdir /mnt/bak
    # mv /etc/yum.repos.d/* /mnt/bak/

    Download the Aliyun base and EPEL repos

    # wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    # wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
    
    Add the Ceph repo
    # vim /etc/yum.repos.d/ceph.repo
    [ceph]
    name=ceph
    baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
    gpgcheck=0
    priority =1
    [ceph-noarch]
    name=cephnoarch
    baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
    gpgcheck=0
    priority =1
    [ceph-source]
    name=Ceph source packages
    baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
    gpgcheck=0
    priority=1

    Create the cephuser account on each node and grant it sudo

    # useradd -d /home/cephuser -m cephuser
    # echo "cephuser"|passwd --stdin cephuser
    # echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
    # chmod 0440 /etc/sudoers.d/cephuser
    # sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers

    Test cephuser's sudo access

    # su - cephuser
    $ sudo su -

    Set up mutual SSH trust between the nodes
    First generate a key pair on the ceph-admin node, then copy its .ssh directory to the other nodes

    [root@ceph-admin ~]# su - cephuser
    [cephuser@ceph-admin ~]$ ssh-keygen -t rsa    # press Enter through all the prompts
    [cephuser@ceph-admin ~]$ cd .ssh/
    [cephuser@ceph-admin .ssh]$ ls
    id_rsa id_rsa.pub
    [cephuser@ceph-admin .ssh]$ cp id_rsa.pub authorized_keys
    
    [cephuser@ceph-admin .ssh]$ scp -r /home/cephuser/.ssh ceph-node1:/home/cephuser/
    [cephuser@ceph-admin .ssh]$ scp -r /home/cephuser/.ssh ceph-node2:/home/cephuser/
    [cephuser@ceph-admin .ssh]$ scp -r /home/cephuser/.ssh ceph-node3:/home/cephuser/

    Then verify the passwordless SSH trust for cephuser from each node

    $ ssh -p22 cephuser@ceph-admin
    $ ssh -p22 cephuser@ceph-node1
    $ ssh -p22 cephuser@ceph-node2
    $ ssh -p22 cephuser@ceph-node3

    2) Prepare the disks (on the ceph-node1, ceph-node2 and ceph-node3 nodes)
    Do not use too small a disk for testing, or adding it later will fail; 20 GB or larger is recommended.
    Here each of the three nodes has a raw 20 GB disk attached

    [root@ceph-node1 ~]# fdisk -l /dev/sdb
    WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
    
    Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: gpt
    Disk identifier: DAFD4E71-4772-4325-B3F2-CF89EA64F34A

    Partition and format the disk (note: this is required on every OSD node)

    $ sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
    $ sudo mkfs.xfs /dev/sdb -f

    Check the filesystem type (should be xfs)
    $ sudo blkid -o value -s TYPE /dev/sdb

    3) Deployment (from the ceph-admin node, using ceph-deploy)

    [root@ceph-admin ~]# su - cephuser
    Install ceph-deploy
    [cephuser@ceph-admin ~]$ sudo yum update -y && sudo yum install ceph-deploy -y
    [cephuser@ceph-admin ~]$ sudo yum -y install ceph ceph-radosgw    # note: both packages are required, otherwise the ceph install later will fail

    Create a working directory for the cluster

    [cephuser@ceph-admin ~]$ mkdir cluster
    [cephuser@ceph-admin ~]$ cd cluster/

    Create the cluster (the argument is the monitor node's hostname; here the monitor and the admin node are the same machine, ceph-admin)

    [cephuser@ceph-admin cluster]$ ceph-deploy new ceph-admin
    .........
    [ceph-admin][DEBUG ] IP addresses found: [u'192.168.111.169']
    [ceph_deploy.new][DEBUG ] Resolving host ceph-admin
    [ceph_deploy.new][DEBUG ] Monitor ceph-admin at 192.168.111.169
    [ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-admin']
    [ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.111.169']
    [ceph_deploy.new][DEBUG ] Creating a random mon key...
    [ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
    [ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
    
    Edit ceph.conf (note: mon_host must be in the same subnet as the public network!)
    [cephuser@ceph-admin cluster]$ vim ceph.conf    # append the two lines below

    ......
    public network = 192.168.111.0/24
    osd pool default size = 3

    Install ceph (this takes a while; be patient...)
    [cephuser@ceph-admin cluster]$ ceph-deploy install ceph-admin ceph-node1 ceph-node2 ceph-node3

    Initialize the monitor node and gather all the keys
    [cephuser@ceph-admin cluster]$ ceph-deploy mon create-initial
    [cephuser@ceph-admin cluster]$ ceph-deploy gatherkeys ceph-admin

    Add OSDs to the cluster
    List the available disks on the OSD nodes

    [cephuser@ceph-admin cluster]$ ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3

    Use the zap option to wipe the existing partitions on the OSD disks

    [cephuser@ceph-admin cluster]$ ceph-deploy disk zap ceph-node1:/dev/sdb ceph-node2:/dev/sdb ceph-node3:/dev/sdb

    Prepare the OSDs (with the prepare command)

    [cephuser@ceph-admin cluster]$ ceph-deploy osd prepare ceph-node1:/dev/sdb ceph-node2:/dev/sdb ceph-node3:/dev/sdb

    Activate the OSDs (note: ceph-deploy partitioned each disk, so the data partition on /dev/sdb is /dev/sdb1)

    [cephuser@ceph-admin cluster]$ ceph-deploy osd activate ceph-node1:/dev/sdb1 ceph-node2:/dev/sdb1 ceph-node3:/dev/sdb1
    
    You may hit the following error:
    [ceph-node1][WARNIN] ceph_disk.main.Error: Error: /dev/vdb1 is not a directory or block device
    [ceph-node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
    [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/vdb1
    
    This error does not affect the deployment; on the three OSD nodes the commands below show the data partitions successfully mounted
    [root@ceph-node1 ~]# fdisk -l /dev/sdb
    WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
    
    Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: gpt
    Disk identifier: DAFD4E71-4772-4325-B3F2-CF89EA64F34A
    
    
    # Start End Size Type Name
    1 10487808 41943006 15G Ceph OSD ceph data
    2 2048 10487807 5G Ceph Journal ceph journal
    [root@ceph-node1 ~]# lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    sda 8:0 0 40G 0 disk 
    ├─sda1 8:1 0 1G 0 part /boot
    └─sda2 8:2 0 39G 0 part 
    ├─centos-root 253:0 0 37G 0 lvm /
    └─centos-swap 253:1 0 2G 0 lvm [SWAP]
    sdb 8:16 0 20G 0 disk 
    ├─sdb1 8:17 0 15G 0 part /var/lib/ceph/osd/ceph-0 # mounted successfully
    └─sdb2 8:18 0 5G 0 part
    
    [root@ceph-node2 ~]# lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    sda 8:0 0 40G 0 disk 
    ├─sda1 8:1 0 1G 0 part /boot
    └─sda2 8:2 0 39G 0 part 
    ├─centos-root 253:0 0 37G 0 lvm /
    └─centos-swap 253:1 0 2G 0 lvm [SWAP]
    sdb 8:16 0 20G 0 disk 
    ├─sdb1 8:17 0 15G 0 part /var/lib/ceph/osd/ceph-1
    └─sdb2 8:18 0 5G 0 part 
    sr0 11:0 1 4.3G 0 rom 
    
    [root@ceph-node3 ~]# lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    sda 8:0 0 40G 0 disk 
    ├─sda1 8:1 0 1G 0 part /boot
    └─sda2 8:2 0 39G 0 part 
    ├─centos-root 253:0 0 37G 0 lvm /
    └─centos-swap 253:1 0 2G 0 lvm [SWAP]
    sdb 8:16 0 20G 0 disk 
    ├─sdb1 8:17 0 15G 0 part /var/lib/ceph/osd/ceph-2 # mounted successfully
    └─sdb2 8:18 0 5G 0 part 
    sr0 11:0 1 4.3G 0 rom

    Check the OSDs:

    [cephuser@ceph-admin cluster]$ ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3
    ........
    [ceph-node1][DEBUG ] /dev/sdb2 ceph journal, for /dev/sdb1 # seeing these two partitions means it worked
    [ceph-node1][DEBUG ] /dev/sdb1 ceph data, active, cluster ceph, osd.0, journal /dev/sdb2
    ........
    
    ........
    [ceph-node2][DEBUG ] /dev/sdb2 ceph journal, for /dev/sdb1
    [ceph-node2][DEBUG ] /dev/sdb1 ceph data, active, cluster ceph, osd.1, journal /dev/sdb2
    ........
    
    ........
    [ceph-node3][DEBUG ] /dev/sdb2 ceph journal, for /dev/sdb1
    [ceph-node3][DEBUG ] /dev/sdb1 ceph data, active, cluster ceph, osd.2, journal /dev/sdb2
    ........
    
    Use ceph-deploy to push the config file and admin keyring to the admin node and the Ceph nodes, so that the Ceph CLI no longer needs the monitor address and ceph.client.admin.keyring specified on every invocation
    [cephuser@ceph-admin cluster]$ ceph-deploy admin ceph-admin ceph-node1 ceph-node2 ceph-node3

    Adjust the keyring permissions

    [cephuser@ceph-admin cluster]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

    Check the cluster health

    [cephuser@ceph-admin cluster]$ sudo ceph health
    HEALTH_OK
    [cephuser@ceph-admin cluster]$ sudo ceph -s
    cluster 33bfa421-8a3b-40fa-9f14-791efca9eb96
    health HEALTH_OK
    monmap e1: 1 mons at {ceph-admin=192.168.10.220:6789/0}
    election epoch 3, quorum 0 ceph-admin
    osdmap e14: 3 osds: 3 up, 3 in
    flags sortbitwise,require_jewel_osds
    pgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects
    100 MB used, 45946 MB / 46046 MB avail
    64 active+clean

    Check the OSD status

    [cephuser@ceph-admin ~]$ ceph osd stat
    osdmap e19: 3 osds: 3 up, 3 in
    flags sortbitwise,require_jewel_osds

    View the OSD tree

    [cephuser@ceph-admin ~]$ ceph osd tree
    ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 0.04376 root default 
    -2 0.01459 host ceph-node1 
    0 0.01459 osd.0 up 1.00000 1.00000
    -3 0.01459 host ceph-node2 
    1 0.01459 osd.1 up 1.00000 1.00000
    -4 0.01459 host ceph-node3 
    2 0.01459 osd.2 up 1.00000 1.00000

    Check the monitor node's service

    [cephuser@ceph-admin cluster]$ sudo systemctl status ceph-mon@ceph-admin
    [cephuser@ceph-admin cluster]$ ps -ef|grep ceph|grep 'cluster'
    ceph 28190 1 0 11:44 ? 00:00:01 /usr/bin/ceph-mon -f --cluster ceph --id ceph-admin --setuser ceph --setgroup ceph
    
    Check the osd service on ceph-node1, ceph-node2 and ceph-node3 in turn; they are all running.
    [cephuser@ceph-node1 ~]$ sudo systemctl status ceph-osd@0.service    # use "start" to start it, "restart" to restart
    [cephuser@ceph-node1 ~]$ sudo ps -ef|grep ceph|grep "cluster"
    ceph 28749 1 0 11:44 ? 00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
    cephuser 29197 29051 0 11:54 pts/2 00:00:00 grep --color=auto cluster
    
    
    [cephuser@ceph-node2 ~]$ sudo systemctl status ceph-osd@1.service
    [cephuser@ceph-node2 ~]$ sudo ps -ef|grep ceph|grep "cluster"
    ceph 28749 1 0 11:44 ? 00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
    cephuser 29197 29051 0 11:54 pts/2 00:00:00 grep --color=auto cluster
    
    [cephuser@ceph-node3 ~]$ sudo systemctl status ceph-osd@2.service
    [cephuser@ceph-node3 ~]$ sudo ps -ef|grep ceph|grep "cluster"
    ceph 28749 1 0 11:44 ? 00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
    cephuser 29197 29051 0 11:54 pts/2 00:00:00 grep --color=auto cluster

    4) Create a filesystem

    First check the MDS status; by default none exists.

    [cephuser@ceph-admin ~]$ ceph mds stat
    e1:

    Create the metadata server (ceph-admin will act as the MDS).
    Note: if no MDS is created, clients will not be able to mount the Ceph filesystem!

    [cephuser@ceph-admin ~]$ pwd
    /home/cephuser
    [cephuser@ceph-admin ~]$ cd cluster/
    [cephuser@ceph-admin cluster]$ ceph-deploy mds create ceph-admin

    Check the MDS status again; it is now starting up

    [cephuser@ceph-admin cluster]$ ceph mds stat
    e2:, 1 up:standby
    
    [cephuser@ceph-admin cluster]$ sudo systemctl status ceph-mds@ceph-admin
    [cephuser@ceph-admin cluster]$ ps -ef|grep cluster|grep ceph-mds
    ceph 29093 1 0 12:46 ? 00:00:00 /usr/bin/ceph-mds -f --cluster ceph --id ceph-admin --setuser ceph --setgroup ceph

    Create the pools. A pool is a logical partition in which Ceph stores data; it acts as a namespace

    [cephuser@ceph-admin cluster]$ ceph osd lspools    # list the existing pools first
    0 rbd,

    A freshly created cluster has only the default rbd pool; create new pools for the filesystem

    [cephuser@ceph-admin cluster]$ ceph osd pool create cephfs_data 10    # the trailing number is the PG count
    pool 'cephfs_data' created
    
    [cephuser@ceph-admin cluster]$ ceph osd pool create cephfs_metadata 10    # the metadata pool for the filesystem
    pool 'cephfs_metadata' created
    
    [cephuser@ceph-admin cluster]$ ceph fs new myceph cephfs_metadata cephfs_data
    new fs with metadata pool 2 and data pool 1

    Check the pools again

    [cephuser@ceph-admin cluster]$ ceph osd lspools
    0 rbd,1 cephfs_data,2 cephfs_metadata,

    Check the MDS status

    [cephuser@ceph-admin cluster]$ ceph mds stat
    e5: 1/1/1 up {0=ceph-admin=up:active}

    Check the cluster status

    [cephuser@ceph-admin cluster]$ sudo ceph -s
    cluster 33bfa421-8a3b-40fa-9f14-791efca9eb96
    health HEALTH_OK
    monmap e1: 1 mons at {ceph-admin=192.168.10.220:6789/0}
    election epoch 3, quorum 0 ceph-admin
    fsmap e5: 1/1/1 up {0=ceph-admin=up:active} # this status line is new
    osdmap e19: 3 osds: 3 up, 3 in
    flags sortbitwise,require_jewel_osds
    pgmap v48: 84 pgs, 3 pools, 2068 bytes data, 20 objects
    101 MB used, 45945 MB / 46046 MB avail
    84 active+clean

    Check the Ceph ports

    [cephuser@ceph-admin cluster]$ sudo lsof -i:6789
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    ceph-mon 54344 ceph 10u IPv4 87460 0t0 TCP ceph-admin:smc-https (LISTEN)
    ceph-mon 54344 ceph 19u IPv4 89724 0t0 TCP ceph-admin:smc-https->ceph-node1:42196 (ESTABLISHED)
    ceph-mon 54344 ceph 20u IPv4 89747 0t0 TCP ceph-admin:smc-https->ceph-node2:35390 (ESTABLISHED)
    ceph-mon 54344 ceph 21u IPv4 89801 0t0 TCP ceph-admin:smc-https->ceph-node3:56818 (ESTABLISHED)
    ceph-mon 54344 ceph 22u IPv4 90813 0t0 TCP ceph-admin:smc-https->ceph-admin:42070 (ESTABLISHED)
    ceph-mds 55257 ceph 8u IPv4 90812 0t0 TCP ceph-admin:42070->ceph-admin:smc-https (ESTABLISHED

    5) Client mount. A client can mount Ceph storage in two ways: via the kernel driver or via FUSE. Here we use FUSE.

    Install ceph-fuse (the client machine here runs CentOS 7)
    yum install -y ceph-fuse
    yum install -y ceph    # note: this needs the Ceph yum repo configured, as described earlier
    Create the mount point
    (any directory will do)
    [root@localhost ~]# mkdir /cephfs

    Copy the config file

    Copy ceph.conf from the admin node (192.168.111.169) to the client
    [root@centos6-02 ~]# rsync -e "ssh -p22" -avpgolr root@192.168.111.169:/etc/ceph/ceph.conf /etc/ceph/
    or
    [root@centos6-02 ~]# rsync -e "ssh -p22" -avpgolr root@192.168.111.169:/home/cephuser/cluster/ceph.conf /etc/ceph/    # the two files have identical contents

    Copy the keyring:

    Copy ceph.client.admin.keyring from the admin node to the client
    [root@centos6-02 ~]# rsync -e "ssh -p22" -avpgolr root@192.168.111.169:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
    or
    [root@centos6-02 ~]# rsync -e "ssh -p22" -avpgolr root@192.168.111.169:/home/cephuser/cluster/ceph.client.admin.keyring /etc/ceph/

    Check the Ceph auth entries

    [root@localhost cephfs]# ceph auth list
    installed auth entries:
    
    mds.ceph-admin
    key: AQBC99de+2vvJxAAkTIeWDwg6Swy6ZXFWPXSQA==
    caps: [mds] allow
    caps: [mon] allow profile mds
    caps: [osd] allow rwx
    osd.0
    key: AQC39dde9KIoGRAA3zH8vLUDQAehO2otptUi+g==
    caps: [mon] allow profile osd
    caps: [osd] allow *
    osd.1
    key: AQDB9ddevzfzARAADHvV8zEKrj4i16KaNLNpaA==
    caps: [mon] allow profile osd
    caps: [osd] allow *
    osd.2
    key: AQDL9ddeqFDQABAAS3YsdBlI1h/65PblGdxQIw==
    caps: [mon] allow profile osd
    caps: [osd] allow *
    client.admin
    key: AQAs89depA23NRAA8yEg0GfHNC/uhKU9jsgp6Q==
    caps: [mds] allow *
    caps: [mon] allow *
    caps: [osd] allow *
    client.bootstrap-mds
    key: AQAt89deGBojEhAAQnJZJF6t2NWzbHqF+90uTg==
    caps: [mon] allow profile bootstrap-mds
    client.bootstrap-mgr
    key: AQAv89des+hAKRAA7j+g8C5dmT3FmwkXIZxo6A==
    caps: [mon] allow profile bootstrap-mgr
    client.bootstrap-osd
    key: AQAt89deWm+XARAAXoLvLk+wLuFuW0kxRY71Ug==
    caps: [mon] allow profile bootstrap-osd
    client.bootstrap-rgw
    key: AQAt89deUmHoCRAAM673iqqNXnl2CWfEiqjXfA==
    caps: [mon] allow profile bootstrap-rgw

    Mount the Ceph storage at /cephfs on the client

    [root@centos6-02 ~]# ceph-fuse -m 192.168.111.169:6789 /cephfs
    2018-06-06 14:28:54.149796 7f8d5c256760 -1 init, newargv = 0x4273580 newargc=11
    ceph-fuse[16107]: starting ceph client
    ceph-fuse[16107]: starting fuse 
    [root@localhost cephfs]# df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/centos-root 37G 11G 27G 29% /
    devtmpfs 898M 0 898M 0% /dev
    tmpfs 910M 0 910M 0% /dev/shm
    tmpfs 910M 9.9M 900M 2% /run
    tmpfs 910M 0 910M 0% /sys/fs/cgroup
    /dev/sda1 1014M 146M 869M 15% /boot
    overlay 37G 11G 27G 29% /var/lib/docker/overlay2/c3ac65ba11100c5858359d15edc307d3b455afb08175663914802a072a4d0b09/merged
    overlay 37G 11G 27G 29% /var/lib/docker/overlay2/9751f3631e8d12b0703a29c4a3e5bac771c1ca71980f7d6a6b39c6cf3eb71784/merged
    overlay 37G 11G 27G 29% /var/lib/docker/overlay2/7c2135cd02ab24d6e286880c27cb3c49f34d9f441a3ffa28fd21bacf486244ab/merged
    overlay 37G 11G 27G 29% /var/lib/docker/overlay2/1993a1b68f0b167b1fc497a064e9706868bde205431e597e203e2618abfeddff/merged
    overlay 37G 11G 27G 29% /var/lib/docker/overlay2/b6f141003953ca7f4262fa6cbbf69b4d3a4d71f9d38909339b285882d916b58e/merged
    shm 64M 0 64M 0% /var/lib/docker/containers/0a52f68a5ffc5b949d1a840fff812d95117964495d71a644506eb0803f5659c2/mounts/shm
    shm 64M 0 64M 0% /var/lib/docker/containers/62e1c7ecf7a15a1ab1e38450b1c944caafac3f23975d4ac88c60e1548f7512ef/mounts/shm
    shm 64M 0 64M 0% /var/lib/docker/containers/c29123fffc4215165a89b63aec8159df90063a307edbcdf0af23b604c7ccb924/mounts/shm
    shm 64M 0 64M 0% /var/lib/docker/containers/04394b89ab39ae42de6020e37ba751aef42064cb78402b16459594d2c15f4160/mounts/shm
    shm 64M 0 64M 0% /var/lib/docker/containers/80ba82504c2ef0cacfa0797d679a36ec946adad524bc498e921521a395ed81ca/mounts/shm
    tmpfs 182M 0 182M 0% /run/user/0
    ceph-fuse 45G 324M 45G 1% /cephfs
    overlay 37G 11G 27G 29% /var/lib/docker/overlay2/a4cfb7152ca48da3af7558425d91484952a049f12f7c79e0febb131e5bafda65/merged
    shm 64M 0 64M 0% /var/lib/docker/containers/be0bfff9927041ad68f9427e7566160eae92f38190eb08c3ca69b20c45623d3a/mounts/shm

    As shown above, the Ceph storage is now mounted: three OSD nodes with a 15 GB ceph data partition each (check the size with "lsblk" on a node), 45 GB in total!
    Unmount the Ceph storage

    [root@centos6-02 ~]# umount /cephfs
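    Note that the 45 GB reported by df is raw capacity, the sum of the three data partitions; with "osd pool default size = 3" set earlier, every object is stored three times, so usable space is roughly a third of that. A quick sketch of the arithmetic:

```shell
# Capacity arithmetic (illustrative): raw space is the sum of the OSD data
# partitions; usable space divides by the replica count (osd pool default size).
osds=3; per_osd_gb=15; replicas=3
raw=$(( osds * per_osd_gb ))
usable=$(( raw / replicas ))
echo "raw=${raw}G usable~${usable}G"
```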

    Tuning:
    PG Number
    The PG and PGP counts must be adjusted to the number of OSDs. The formula is given below, and the final result should equal, or be close to, a power of two.

    Total PGs = (Total_number_of_OSD * 100) / max_replication_count

    For example, with 15 OSDs and a replica count of 3, the formula gives 500; the closest power of two is 512, so pg_num and pgp_num for the pool (volumes) should both be set to 512.

    ceph osd pool set volumes pg_num 512
    ceph osd pool set volumes pgp_num 512
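    The rule above can be sketched as a small shell helper (the function name is illustrative, not a Ceph tool) that applies the formula and rounds up to the next power of two:

```shell
# Hypothetical helper: suggest a pg_num from the OSD count and replica count,
# rounding the (osds * 100 / replicas) target up to the next power of two.
suggest_pg_num() {
  osds=$1; replicas=$2
  target=$(( osds * 100 / replicas ))
  pg=1
  while [ "$pg" -lt "$target" ]; do
    pg=$(( pg * 2 ))
  done
  echo "$pg"
}

suggest_pg_num 15 3    # 15 OSDs, 3 replicas: target 500, rounds up to 512
```

    For the three-OSD test cluster built in this post (3 * 100 / 3 = 100), the helper would suggest 128; the tiny value of 10 used earlier is fine for a throwaway test pool.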

  • Original (Chinese): https://www.cnblogs.com/abner123/p/13154691.html