Environment:
Two nodes, two disks.
OS: CentOS 7
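These steps assume glusterd is running on both nodes and that they already belong to the same trusted pool (set up in the previous article). If not, probing from one node is enough to join them, for example:
# gluster peer probe k8s-node2
# gluster peer status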
1. Create a replica volume
Continuing from the previous step, we reuse the disks that were used for the distributed volume.
[root@k8s-node2 brick]# gluster volume create rv0 replica 2 hadoop4:/data/brick1/brick/ k8s-node2:/data/brick1/brick/
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume create: rv0: failed: /data/brick1/brick is already part of a volume
As you can see, the command fails.
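The error occurs because the brick directory still carries the extended attributes GlusterFS stamped on it when it belonged to the old volume. You can confirm this with getfattr, which should list trusted.gfid and trusted.glusterfs.volume-id among the attributes (a quick check; the values will differ per setup):
# getfattr -d -m . -e hex /data/brick1/brick/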
Solution:
Run the following on all nodes to strip those attributes and wipe the leftover data and metadata:
# setfattr -x trusted.glusterfs.volume-id /data/brick1/brick/
# setfattr -x trusted.gfid /data/brick1/brick/
# rm -rf /data/brick1/brick/*
# rm -rf /data/brick1/brick/.glusterfs/
Now recreate the volume:
# gluster volume create rv0 replica 2 hadoop4:/data/brick1/brick/ k8s-node2:/data/brick1/brick/
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume create: rv0: success: please start the volume to access data
Check the volume info:
[root@k8s-node2 brick]# gluster volume info rv0
Volume Name: rv0
Type: Replicate
Volume ID: 62e03ad1-94f0-4855-96ad-b0cecac97d23
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: hadoop4:/data/brick1/brick
Brick2: k8s-node2:/data/brick1/brick
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
The Type field now shows Replicate.
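Note the warning gluster printed: a plain replica 2 volume is prone to split-brain. For production, the docs recommend replica 3 or an arbiter configuration; a sketch of the arbiter variant, assuming a hypothetical third node hadoop5 with its own brick, would be:
# gluster volume create rv0 replica 3 arbiter 1 hadoop4:/data/brick1/brick k8s-node2:/data/brick1/brick hadoop5:/data/brick1/brick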
Start the replica volume:
# gluster volume start rv0
volume start: rv0: success
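Before mounting, you can confirm that both bricks actually came online; in the status output each brick should show Online: Y (ports and PIDs will differ per setup):
# gluster volume status rv0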
2. Mount and use
2.1 Mount
# mount -t glusterfs hadoop4:/rv0 /gluster_client
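To make the mount survive reboots, an /etc/fstab entry along these lines works (a sketch; the backup-volfile-servers option lets the client fetch the volume file from the other node if hadoop4 is down at mount time):
hadoop4:/rv0  /gluster_client  glusterfs  defaults,_netdev,backup-volfile-servers=k8s-node2  0 0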
2.2 Create a file
[root@k8s-node2 gluster_client]# touch test1
[root@k8s-node2 gluster_client]# echo 111 > test1
[root@k8s-node2 gluster_client]# echo 222 >> test1
Both nodes now store an identical copy of the file.
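You can verify this directly on each node's brick path; the same content should appear on both hadoop4 and k8s-node2:
# cat /data/brick1/brick/test1
111
222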
References:
https://blog.csdn.net/weixin_33800593/article/details/92699562
https://blog.csdn.net/u012720518/article/details/105052176