• heketi: basic installation, configuration, and usage


    heketi provides a RESTful API for a GlusterFS cluster, so the cluster can be managed and controlled through REST-style calls

    Prerequisites

    1 Install and start glusterd; see https://www.cnblogs.com/bfmq/p/9990467.html for reference
    2 The disks to be used must be raw block devices that have not been formatted or mounted (a sketch for checking and wiping a disk follows this list)
    3 At least two nodes are required; otherwise, even after the cluster is created, no volume can be created on it
    4 If you are not integrating with k8s there is no need for heketi; GlusterFS's built-in management tools are enough
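
    A raw disk can be checked, and wiped if needed, before handing it to heketi. A minimal sketch, assuming /dev/vdb is the target device and any data on it is disposable:

    [root@glusterfs-bj-ali-bgp1 ~]# lsblk -f /dev/vdb                        # FSTYPE and MOUNTPOINT should be empty
    [root@glusterfs-bj-ali-bgp1 ~]# wipefs -a /dev/vdb                       # destructive: clears leftover filesystem/LVM signatures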

    Install the packages, edit the configuration file, and start the service

    [root@glusterfs-bj-ali-bgp1 ~]# yum install heketi heketi-client -y
    [root@glusterfs-bj-ali-bgp1 ~]# cat /etc/heketi/heketi.json
    {
      "_port_comment": "Heketi Server Port Number",
      "port": "8080",                                                            # 启用端口
    
      "_use_auth": "Enable JWT authorization. Please enable for deployment",
      "use_auth": false,                                                        # JWT认证是否开启
    
      "_jwt": "Private keys for access",                                        # JWT认证开启情况下配置
      "jwt": {
        "_admin": "Admin has access to all APIs",
        "admin": {
          "key": "My Secret"                                                    # 超级用户的密码,超级用户可以使用所有api
        },
        "_user": "User only has access to /volumes endpoint",
        "user": {
          "key": "My Secret"                                                    # 普通用户的密码,普通用户可以使用卷资源,即集群、节点之间的关系无法操作
        }
      },
    
      "_glusterfs_comment": "GlusterFS Configuration",
      "glusterfs": {
        "_executor_comment": [                                                    # 执行命令的方式
          "Execute plugin. Possible choices: mock, ssh",
          "mock: This setting is used for testing and development.",            # 开发者模式,测试功能用
          "      It will not send commands to any node.",
          "ssh:  This setting will notify Heketi to ssh to the nodes.",            # ssh是正常生产环境使用的
          "      It will need the values in sshexec to be configured.",
          "kubernetes: Communicate with GlusterFS containers over",                # 当gfs集群在kubernetes作为ds跑的时候使用
          "            Kubernetes exec api."
        ],
        "executor": "ssh",
    
        "_sshexec_comment": "SSH username and private key file information",    # 调用ssh时配置
        "sshexec": {
          "keyfile": "/etc/heketi/id_rsa",                                        # ssh执行用户的私钥,heketi用户需要该文件读权限
          "user": "root",                                                        # ssh执行用户,生产不用用root哦
          "port": "22",                                                            # ssh端口
          "fstab": "/etc/fstab"                                                    # 系统fstab路径
        },
    
        "_kubeexec_comment": "Kubernetes configuration",                        # 调用k8s时配置
        "kubeexec": {
          "host" :"https://kubernetes.host:8443",                                # k8s api地址端口
          "cert" : "/path/to/crt.file",                                            # k8s证书
          "insecure": false,                                                    # 是否启用不安全模式
          "user": "kubernetes username",                                        # k8s用户
          "password": "password for kubernetes user",                            # k8s密码
          "namespace": "OpenShift project or Kubernetes namespace",                # 项目所处命名空间
          "fstab": "Optional: Specify fstab file on node.  Default is /etc/fstab"
        },
    
        "_db_comment": "Database file name",
        "db": "/var/lib/heketi/heketi.db",                                        # heketi会有一个自己的小库,这个默认地址即可
    
        "_loglevel_comment": [
          "Set log level. Choices are:",
          "  none, critical, error, warning, info, debug",
          "Default is warning"
        ],
        "loglevel" : "warning"                                                    # 日志等级,日志会在/var/log/messages里显示
      }
    }
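
    Before starting the service, the ssh executor needs a working key: heketi must be able to ssh from this host to every node in the topology as the configured user. A minimal sketch, assuming a passwordless key and the node IPs used in the topology below:

    [root@glusterfs-bj-ali-bgp1 ~]# ssh-keygen -t rsa -N '' -f /etc/heketi/id_rsa            # passwordless key for heketi
    [root@glusterfs-bj-ali-bgp1 ~]# chown heketi:heketi /etc/heketi/id_rsa*                  # heketi needs read access
    [root@glusterfs-bj-ali-bgp1 ~]# ssh-copy-id -i /etc/heketi/id_rsa.pub root@172.17.1.1    # repeat for every node
    [root@glusterfs-bj-ali-bgp1 ~]# ssh-copy-id -i /etc/heketi/id_rsa.pub root@172.17.1.2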
    [root@glusterfs-bj-ali-bgp1 ~]# systemctl enable heketi  && systemctl start heketi && systemctl status heketi
    [root@glusterfs-bj-ali-bgp1 heketi]# netstat -tpln
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      3045/sshd           
    tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      59232/glusterd      
    tcp6       0      0 :::8080                 :::*                    LISTEN      60356/heketi             # port is open
    [root@glusterfs-bj-ali-bgp1 heketi]# curl http://127.0.0.1:8080/hello                # test connectivity
    Hello from Heketi
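
    heketi-cli talks to this same endpoint. Pointing it at the server once through environment variables saves repeating flags on every call; a minimal sketch, assuming the port configured above (the user/key variables only matter when use_auth is true):

    [root@glusterfs-bj-ali-bgp1 heketi]# export HEKETI_CLI_SERVER=http://127.0.0.1:8080
    [root@glusterfs-bj-ali-bgp1 heketi]# export HEKETI_CLI_USER=admin
    [root@glusterfs-bj-ali-bgp1 heketi]# export HEKETI_CLI_KEY='My Secret'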

    Next, load a topology file to initialize a cluster in one step and register the node and disk device resources. Note that this JSON file must be a single line!

    [root@glusterfs-bj-ali-bgp1 heketi]# cat /etc/heketi/topology.json                    # the file must be collapsed into single-line JSON, otherwise it cannot be parsed; this is the pitfall in many online write-ups
    {
        "clusters": [
            {
                "nodes": [
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "172.17.1.1"
                                ],
                                "storage": [
                                    "172.17.1.1"
                                ]
                            },
                            "zone": 1
                        },
                        "devices": [
                            "/dev/vdb",
                            "/dev/vdc",
                        ]
                    },
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "172.17.1.2"
                                ],
                                "storage": [
                                    "172.17.1.2"
                                ]
                            },
                            "zone": 1
                        },
                        "devices": [
                            "/dev/vdd",
                            "/dev/vde",
                        ]
                    }
                ]
            }
        ]
    }
    [root@glusterfs-bj-ali-bgp1 heketi]# cat /etc/heketi/topology.json                # this is what the file should actually look like!
    {"clusters": [{"nodes": [{"node": {"hostnames": {"manage": ["172.17.1.1"], "storage": ["172.17.1.1"]}, "zone": 1}, "devices": ["/dev/vdb", "/dev/vdc"]}, {"node": {"hostnames": {"manage": ["172.17.1.2"], "storage": ["172.17.1.2"]}, "zone": 1}, "devices": ["/dev/vdd", "/dev/vde"]}]}]}
    [root@glusterfs-bj-ali-bgp1 ~]# heketi-cli topology load --json=/etc/heketi/topology.json        # with auth enabled, use: heketi-cli --user admin --secret <admin key from the config> topology load --json=/etc/heketi/topology.json
    Creating cluster ... ID: 5ff75a20c566d3ff520026a2bcfbd359
        Allowing file volumes on cluster.
        Allowing block volumes on cluster.
        Creating node 172.17.1.1 ... ID: a778da2dfeebcb1dfd6d3ddb50ee9658
            Adding device /dev/vdb ... OK
            Adding device /dev/vdc ... OK
        Creating node 172.17.1.2 ... ID: 3e13521fdc3ff7ce1dec30a5107e9d43
            Adding device /dev/vdd ... OK
            Adding device /dev/vde ... OK
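
    The result can be verified through heketi itself with the standard heketi-cli subcommands; a quick sketch:

    [root@glusterfs-bj-ali-bgp1 ~]# heketi-cli cluster list                    # should show the cluster ID above
    [root@glusterfs-bj-ali-bgp1 ~]# heketi-cli topology info                   # full tree: cluster -> nodes -> devices -> bricks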

    Create a volume, then mount and use it

    [root@glusterfs-bj-ali-bgp1 ~]# heketi-cli volume create --size=6000 --replica=2            # create a 6000GB volume with 2 replicas; it takes roughly 2-3 minutes. See heketi-cli volume create -h for more options; a test does not need 6000GB, of course
    Name: vol_adf27fe83b028ab6d7b0fde93a749d20                                    # note down this name
    Size: 6000
    Volume Id: adf27fe83b028ab6d7b0fde93a749d20
    Cluster Id: 5ff75a20c566d3ff520026a2bcfbd359
    Mount: 172.17.1.1:vol_adf27fe83b028ab6d7b0fde93a749d20
    Mount Options: backup-volfile-servers=172.17.1.1
    Block: false
    Free Size: 0
    Reserved Size: 0
    Block Hosting Restriction: (none)
    Block Volumes: []
    Durability Type: replicate
    Distribute Count: 2
    Replica Count: 2
    [root@glusterfs-bj-ali-bgp1 ~]# df -h                                        # /var/lib/heketi/mounts now holds 12T of bricks in total (my two IPs are the same machine, so all the bricks landed on one host)
    Filesystem                                                                              Size  Used Avail Use% Mounted on
    devtmpfs                                                                                 16G     0   16G   0% /dev
    tmpfs                                                                                    16G  120K   16G   1% /dev/shm
    tmpfs                                                                                    16G  680K   16G   1% /run
    tmpfs                                                                                    16G     0   16G   0% /sys/fs/cgroup
    /dev/vda1                                                                               118G  3.0G  111G   3% /
    tmpfs                                                                                   3.2G     0  3.2G   0% /run/user/0
    /dev/mapper/vg_fd0ee85ed75d2b5c9fa6f5085b930806-brick_898f2216c03bac4f3ba17f55a9640917  3.0T   35M  3.0T   1% /var/lib/heketi/mounts/vg_fd0ee85ed75d2b5c9fa6f5085b930806/brick_898f2216c03bac4f3ba17f55a9640917
    /dev/mapper/vg_416dcbb83bb64cfad79bfaaf64649e98-brick_375140d6002ea63c8f86675469ef1ee8  3.0T   35M  3.0T   1% /var/lib/heketi/mounts/vg_416dcbb83bb64cfad79bfaaf64649e98/brick_375140d6002ea63c8f86675469ef1ee8
    /dev/mapper/vg_541a2089248e4b33f465eb3b15a55170-brick_34aea0de9fbdcf36b7af09eed538ea00  3.0T   35M  3.0T   1% /var/lib/heketi/mounts/vg_541a2089248e4b33f465eb3b15a55170/brick_34aea0de9fbdcf36b7af09eed538ea00
    /dev/mapper/vg_3d348787cb304b524fe3261c2a7ccb7d-brick_7c73c33b5a64b7af4159e21c12847a64  3.0T   35M  3.0T   1% /var/lib/heketi/mounts/vg_3d348787cb304b524fe3261c2a7ccb7d/brick_7c73c33b5a64b7af4159e21c12847a64
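
    The same volume can be inspected from the gluster side with the native CLI; a short sketch using the volume name from the create output above:

    [root@glusterfs-bj-ali-bgp1 ~]# gluster volume info vol_adf27fe83b028ab6d7b0fde93a749d20      # type, brick list, options
    [root@glusterfs-bj-ali-bgp1 ~]# gluster volume status vol_adf27fe83b028ab6d7b0fde93a749d20    # brick processes and ports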
    
    # switch to another machine
    [root@devops-bj-ali-bgp1 ~]# yum install -y glusterfs-fuse                    # install the client
    [root@devops-bj-ali-bgp1 ~]# mount -t glusterfs -o backup-volfile-servers=glusterfs-bj-ali-bgp2,log-level=WARNING glusterfs-bj-ali-bgp1:/vol_adf27fe83b028ab6d7b0fde93a749d20 /data/loki
    [root@devops-bj-ali-bgp1 ~]# df -h
    Filesystem                                                   Size  Used Avail Use% Mounted on
    ...
    glusterfs-bj-ali-bgp1:/vol_adf27fe83b028ab6d7b0fde93a749d20  5.9T   61G  5.8T   2% /data/loki
    [root@devops-bj-ali-bgp1 ~]# cd /data/
    [root@devops-bj-ali-bgp1 data]# ll loki/
    total 16
    drwxr-xr-x 8 root root 4096 Mar 16 08:00 boltdb-shipper-active
    drwxr-xr-x 3 root root 4096 Mar 12 15:32 boltdb-shipper-cache
    drwxr-xr-x 5 root root 4096 Mar 16 09:53 boltdb-shipper-compactor
    drwx------ 2 root root 4096 Mar 16 11:18 chunks
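
    To survive a client reboot the mount can also go into /etc/fstab; a minimal sketch, assuming the same volume and mount point as above:

    [root@devops-bj-ali-bgp1 ~]# cat >> /etc/fstab <<'EOF'
    glusterfs-bj-ali-bgp1:/vol_adf27fe83b028ab6d7b0fde93a749d20 /data/loki glusterfs defaults,_netdev,backup-volfile-servers=glusterfs-bj-ali-bgp2 0 0
    EOF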
    
    # back on the original machine the data is visible
    [root@glusterfs-bj-ali-bgp1 ~]# ll /var/lib/heketi/mounts/*/*/*
    /var/lib/heketi/mounts/vg_3d348787cb304b524fe3261c2a7ccb7d/brick_7c73c33b5a64b7af4159e21c12847a64/brick:
    total 1308
    drwxr-xr-x 8 root root    8192 Mar 16 08:00 boltdb-shipper-active
    drwxr-xr-x 3 root root      25 Mar 12 15:32 boltdb-shipper-cache
    drwxr-xr-x 5 root root      63 Mar 16 09:53 boltdb-shipper-compactor
    drwx------ 2 root root 1179648 Mar 16 11:18 chunks
    
    /var/lib/heketi/mounts/vg_416dcbb83bb64cfad79bfaaf64649e98/brick_375140d6002ea63c8f86675469ef1ee8/brick:
    total 1324
    drwxr-xr-x 8 root root    8192 Mar 16 08:00 boltdb-shipper-active
    drwxr-xr-x 3 root root      25 Mar 12 15:32 boltdb-shipper-cache
    drwxr-xr-x 5 root root      63 Mar 16 09:53 boltdb-shipper-compactor
    drwx------ 2 root root 1196032 Mar 16 11:18 chunks
    
    /var/lib/heketi/mounts/vg_541a2089248e4b33f465eb3b15a55170/brick_34aea0de9fbdcf36b7af09eed538ea00/brick:
    total 1036
    drwxr-xr-x 8 root root    117 Mar 16 08:00 boltdb-shipper-active
    drwxr-xr-x 3 root root     33 Mar 12 15:32 boltdb-shipper-cache
    drwxr-xr-x 5 root root     79 Mar 16 09:53 boltdb-shipper-compactor
    drwx------ 2 root root 909312 Mar 16 11:18 chunks
    
    /var/lib/heketi/mounts/vg_fd0ee85ed75d2b5c9fa6f5085b930806/brick_898f2216c03bac4f3ba17f55a9640917/brick:
    total 860
    drwxr-xr-x 8 root root    145 Mar 16 08:00 boltdb-shipper-active
    drwxr-xr-x 3 root root     33 Mar 12 15:32 boltdb-shipper-cache
    drwxr-xr-x 5 root root     79 Mar 16 09:53 boltdb-shipper-compactor
    drwx------ 2 root root 745472 Mar 16 11:18 chunks
    [root@glusterfs-bj-ali-bgp1 ~]# 
    [root@glusterfs-bj-ali-bgp1 ~]# heketi-cli volume create --size=1000 --replica=2            # more volumes can be created later; here is another 1000GB two-replica volume
    Name: vol_8be30f4b5edc2b6dee325492e7400c96
    Size: 1000
    Volume Id: 8be30f4b5edc2b6dee325492e7400c96
    Cluster Id: 5ff75a20c566d3ff520026a2bcfbd359
    Mount: 172.17.32.102:vol_8be30f4b5edc2b6dee325492e7400c96
    Mount Options: backup-volfile-servers=172.17.32.101
    Block: false
    Free Size: 0
    Reserved Size: 0
    Block Hosting Restriction: (none)
    Block Volumes: []
    Durability Type: replicate
    Distribute Count: 1
    Replica Count: 2
    [root@glusterfs-bj-ali-bgp1 ~]# df -h                                                        # new brick LVs appear under the existing VGs; mount the new volume from a client in the same way
    Filesystem                                                                              Size  Used Avail Use% Mounted on
    devtmpfs                                                                                 16G     0   16G   0% /dev
    tmpfs                                                                                    16G  120K   16G   1% /dev/shm
    tmpfs                                                                                    16G  732K   16G   1% /run
    tmpfs                                                                                    16G     0   16G   0% /sys/fs/cgroup
    /dev/vda1                                                                               118G  3.0G  110G   3% /
    tmpfs                                                                                   3.2G     0  3.2G   0% /run/user/0
    /dev/mapper/vg_fd0ee85ed75d2b5c9fa6f5085b930806-brick_898f2216c03bac4f3ba17f55a9640917  3.0T  1.6G  3.0T   1% /var/lib/heketi/mounts/vg_fd0ee85ed75d2b5c9fa6f5085b930806/brick_898f2216c03bac4f3ba17f55a9640917
    /dev/mapper/vg_416dcbb83bb64cfad79bfaaf64649e98-brick_375140d6002ea63c8f86675469ef1ee8  3.0T  363M  3.0T   1% /var/lib/heketi/mounts/vg_416dcbb83bb64cfad79bfaaf64649e98/brick_375140d6002ea63c8f86675469ef1ee8
    /dev/mapper/vg_541a2089248e4b33f465eb3b15a55170-brick_34aea0de9fbdcf36b7af09eed538ea00  3.0T  1.7G  3.0T   1% /var/lib/heketi/mounts/vg_541a2089248e4b33f465eb3b15a55170/brick_34aea0de9fbdcf36b7af09eed538ea00
    /dev/mapper/vg_3d348787cb304b524fe3261c2a7ccb7d-brick_7c73c33b5a64b7af4159e21c12847a64  3.0T  362M  3.0T   1% /var/lib/heketi/mounts/vg_3d348787cb304b524fe3261c2a7ccb7d/brick_7c73c33b5a64b7af4159e21c12847a64
    /dev/mapper/vg_541a2089248e4b33f465eb3b15a55170-brick_d4af7bb821d0e29c9e140e067bdeff13 1000G   35M 1000G   1% /var/lib/heketi/mounts/vg_541a2089248e4b33f465eb3b15a55170/brick_d4af7bb821d0e29c9e140e067bdeff13
    /dev/mapper/vg_416dcbb83bb64cfad79bfaaf64649e98-brick_79bc81de736ff888511b2bde46678b41 1000G   35M 1000G   1% /var/lib/heketi/mounts/vg_416dcbb83bb64cfad79bfaaf64649e98/brick_79bc81de736ff888511b2bde46678b41
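
    Volumes created through heketi should also be expanded and deleted through heketi rather than with the gluster CLI directly, so that heketi's database stays in sync with the cluster. A short sketch of the usual lifecycle commands, using the volume ID from the create output above:

    [root@glusterfs-bj-ali-bgp1 ~]# heketi-cli volume expand --volume=8be30f4b5edc2b6dee325492e7400c96 --expand-size=100    # grow by 100GB
    [root@glusterfs-bj-ali-bgp1 ~]# heketi-cli volume delete 8be30f4b5edc2b6dee325492e7400c96                              # remove the volume and its bricks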