Redis cluster online migration


    Address plan

    Hostname   IP address   Ports
    redis01    10.0.0.10    6379, 6380
    redis02    10.0.0.60    6379, 6380
    redis03    10.0.0.61    6379, 6380
    redis04    10.0.0.70    6379, 6380
    redis05    10.0.0.71    6379, 6380
    redis06    10.0.0.72    6379, 6380

    The first three hosts form the old cluster and the last three the new one. This post walks through migrating the cluster online in a production environment, with no downtime.

    The cluster setup itself is described at https://www.cnblogs.com/zh-dream/p/12249767.html

    Preparation

    [root@redis01 module]# mkdir -p 6379/etc
    [root@redis01 module]# mkdir -p 6380/etc
    [root@redis01 module]# cp redis-5.0.0/etc/redis.conf 6379/etc/
    [root@redis01 module]# cp redis-5.0.0/etc/redis.conf 6380/etc/
    [root@redis01 module]# mkdir 6380/{data,run,logs}
    [root@redis01 module]# mkdir 6379/{data,run,logs}
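The per-port directory trees above can also be created in a single loop. A minimal sketch; it uses a scratch directory (`BASE`) so it can be tried safely anywhere, substitute /data/module on a real node:

```shell
# Create the etc/data/run/logs tree for both instance ports in one pass.
# BASE is a scratch dir for safe testing; use /data/module on a real node.
BASE="$(mktemp -d)"
for port in 6379 6380; do
    for sub in etc data run logs; do
        mkdir -p "$BASE/$port/$sub"
    done
done
ls "$BASE/6379"
```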

    Edit the configuration files

    [root@redis01 module]# sed -ri -e 's@^(dir /data/module/).*@\16380/data@' -e 's@^(pidfile ).*@\1/data/module/6380/run/redis_6380.pid@' -e 's/^(port )6379/\16380/' -e 's@^(logfile "/data/module/).*@\16380/logs/redis_6380.log"@' -e 's@^# cluster-config-file nodes-6379.conf@cluster-config-file nodes-6380.conf@' 6380/etc/redis.conf

    [root@redis01 module]# sed -ri -e 's@^(dir /data/module/).*@\16379/data@' -e 's@^(pidfile ).*@\1/data/module/6379/run/redis_6379.pid@' -e 's@^(logfile "/data/module/).*@\16379/logs/redis_6379.log"@' -e 's@^# cluster-config-file nodes-6379.conf@cluster-config-file nodes-6379.conf@' 6379/etc/redis.conf
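Those one-liners lean on sed backreferences (`\1` re-emits the captured directive prefix), which are easy to mangle when copy-pasting. A small self-contained check of the same style of substitutions against a sample config fragment; the directive values below are assumptions standing in for the real redis.conf:

```shell
# Sample config fragment carrying the directives the sed edits target.
cat > /tmp/redis-sample.conf <<'EOF'
port 6379
dir /data/module/redis/data
pidfile /var/run/redis_6379.pid
logfile "/data/module/redis/logs/redis.log"
# cluster-config-file nodes-6379.conf
EOF

# Same edits as above, retargeted for the 6380 instance. Note the
# backslash in \1 is required: it re-emits the captured directive name.
sed -ri \
    -e 's@^(dir ).*@\1/data/module/6380/data@' \
    -e 's@^(pidfile ).*@\1/data/module/6380/run/redis_6380.pid@' \
    -e 's@^(port )6379@\16380@' \
    -e 's@^(logfile ).*@\1"/data/module/6380/logs/redis_6380.log"@' \
    -e 's@^# (cluster-config-file )nodes-6379.conf@\1nodes-6380.conf@' \
    /tmp/redis-sample.conf

grep -E '^(port|dir|cluster-config-file)' /tmp/redis-sample.conf
```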

    Start the Redis instances

    [root@redis01 module]# for i in 79 80;do redis-server 63$i/etc/redis.conf;done

    [root@redis01 module]# ss -lntp
    State Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0 511 10.0.0.10:6379 *:* users:(("redis-server",pid=16372,fd=7))
    LISTEN 0 511 10.0.0.10:6380 *:* users:(("redis-server",pid=16374,fd=7))
    LISTEN 0 128 *:22 *:* users:(("sshd",pid=743,fd=3))
    LISTEN 0 100 127.0.0.1:25 *:* users:(("master",pid=834,fd=13))
    LISTEN 0 511 10.0.0.10:16379 *:* users:(("redis-server",pid=16372,fd=10))
    LISTEN 0 511 10.0.0.10:16380 *:* users:(("redis-server",pid=16374,fd=10))
    LISTEN 0 128 :::22 :::* users:(("sshd",pid=743,fd=4))
    LISTEN 0 100 ::1:25 :::* users:(("master",pid=834,fd=14))
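Each instance listens on two ports: its client port and a cluster bus port, which Redis Cluster derives by adding a fixed offset of 10000 to the client port. That is why 16379 and 16380 also show up in the ss output above:

```shell
# Cluster bus port = client port + 10000 (fixed offset in Redis Cluster).
for port in 6379 6380; do
    echo "client port $port -> cluster bus port $((port + 10000))"
done
```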

    Build the cluster

    Add the master nodes first

    [root@redis01 module]# redis-cli --cluster create 10.0.0.10:6379 10.0.0.60:6379 10.0.0.61:6379
    >>> Performing hash slots allocation on 3 nodes...
    Master[0] -> Slots 0 - 5460
    Master[1] -> Slots 5461 - 10922
    Master[2] -> Slots 10923 - 16383
    M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:6379
    slots:[0-5460] (5461 slots) master
    M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:6379
    slots:[5461-10922] (5462 slots) master
    M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:6379
    slots:[10923-16383] (5461 slots) master
    Can I set the above configuration? (type 'yes' to accept): yes
    >>> Nodes configuration updated
    >>> Assign a different config epoch to each node
    >>> Sending CLUSTER MEET messages to join the cluster
    Waiting for the cluster to join

    >>> Performing Cluster Check (using node 10.0.0.10:6379)
    M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:6379
    slots:[0-5460] (5461 slots) master
    M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:6379
    slots:[10923-16383] (5461 slots) master
    M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:6379
    slots:[5461-10922] (5462 slots) master
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.

    Assign master/replica relationships in the cluster

    [root@redis01 module]# redis-cli --cluster add-node --cluster-slave 10.0.0.60:6380 10.0.0.10:6379
    >>> Adding node 10.0.0.60:6380 to cluster 10.0.0.10:6379
    >>> Performing Cluster Check (using node 10.0.0.10:6379)
    M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:6379
    slots:[0-5460] (5461 slots) master
    M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:6379
    slots:[10923-16383] (5461 slots) master
    M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:6379
    slots:[5461-10922] (5462 slots) master
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    Automatically selected master 10.0.0.10:6379
    >>> Send CLUSTER MEET to node 10.0.0.60:6380 to make it join the cluster.
    Waiting for the cluster to join

    >>> Configure node as replica of 10.0.0.10:6379.
    [OK] New node added correctly.
    [root@redis01 module]# redis-cli --cluster add-node --cluster-slave 10.0.0.61:6380 10.0.0.60:6379
    >>> Adding node 10.0.0.61:6380 to cluster 10.0.0.60:6379
    >>> Performing Cluster Check (using node 10.0.0.60:6379)
    M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:6379
    slots:[5461-10922] (5462 slots) master
    M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:6379
    slots:[0-5460] (5461 slots) master
    1 additional replica(s)
    S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:6380
    slots: (0 slots) slave
    replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
    M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:6379
    slots:[10923-16383] (5461 slots) master
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    Automatically selected master 10.0.0.60:6379
    >>> Send CLUSTER MEET to node 10.0.0.61:6380 to make it join the cluster.
    Waiting for the cluster to join

    >>> Configure node as replica of 10.0.0.60:6379.
    [OK] New node added correctly.
    [root@redis01 module]# redis-cli --cluster add-node --cluster-slave 10.0.0.10:6380 10.0.0.61:6379
    >>> Adding node 10.0.0.10:6380 to cluster 10.0.0.61:6379
    >>> Performing Cluster Check (using node 10.0.0.61:6379)
    M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:6379
    slots:[10923-16383] (5461 slots) master
    M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:6379
    slots:[0-5460] (5461 slots) master
    1 additional replica(s)
    M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:6379
    slots:[5461-10922] (5462 slots) master
    1 additional replica(s)
    S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:6380
    slots: (0 slots) slave
    replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
    S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:6380
    slots: (0 slots) slave
    replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    Automatically selected master 10.0.0.61:6379
    >>> Send CLUSTER MEET to node 10.0.0.10:6380 to make it join the cluster.
    Waiting for the cluster to join

    >>> Configure node as replica of 10.0.0.61:6379.
    [OK] New node added correctly.

    Check the cluster state

    [root@redis01 module]# redis-cli --cluster check 10.0.0.10:6379
    10.0.0.10:6379 (aca05ab1...) -> 0 keys | 5461 slots | 1 slaves.
    10.0.0.61:6379 (c934fb00...) -> 0 keys | 5461 slots | 1 slaves.
    10.0.0.60:6379 (e6fd058c...) -> 0 keys | 5462 slots | 1 slaves.
    [OK] 0 keys in 3 masters.
    0.00 keys per slot on average.
    >>> Performing Cluster Check (using node 10.0.0.10:6379)
    M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:6379
    slots:[0-5460] (5461 slots) master
    1 additional replica(s)
    M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:6379
    slots:[10923-16383] (5461 slots) master
    1 additional replica(s)
    S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:6380
    slots: (0 slots) slave
    replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:6380
    slots: (0 slots) slave
    replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
    S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:6380
    slots: (0 slots) slave
    replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
    M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:6379
    slots:[5461-10922] (5462 slots) master
    1 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.

    # Insert a few keys into the cluster now; they will be used to verify the migration later
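For example, the keys read back after the migration below (`name` and `foo`) can be seeded like this. The sketch only prints the redis-cli commands (a dry run, since it assumes a live cluster at 10.0.0.10); drop the echo to actually execute them:

```shell
# Dry run: print the seeding commands; -c enables cluster-mode redirects.
seed_cmds() {
    for pair in name=tom foo=bar; do
        echo redis-cli -c -h 10.0.0.10 -p 6379 set "${pair%=*}" "${pair#*=}"
    done
}
seed_cmds
```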

    Add the migration target nodes to the cluster

    Add the master nodes

    [root@redis01 module]# redis-cli --cluster add-node 10.0.0.70:6379 10.0.0.10:6379
    >>> Adding node 10.0.0.70:6379 to cluster 10.0.0.10:6379
    >>> Performing Cluster Check (using node 10.0.0.10:6379)
    M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:6379
    slots:[0-5460] (5461 slots) master
    1 additional replica(s)
    M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:6379
    slots:[10923-16383] (5461 slots) master
    1 additional replica(s)
    S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:6380
    slots: (0 slots) slave
    replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:6380
    slots: (0 slots) slave
    replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
    S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:6380
    slots: (0 slots) slave
    replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
    M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:6379
    slots:[5461-10922] (5462 slots) master
    1 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    >>> Send CLUSTER MEET to node 10.0.0.70:6379 to make it join the cluster.
    [OK] New node added correctly.
    [root@redis01 module]# redis-cli --cluster add-node 10.0.0.71:6379 10.0.0.10:6379
    >>> Adding node 10.0.0.71:6379 to cluster 10.0.0.10:6379
    >>> Performing Cluster Check (using node 10.0.0.10:6379)
    M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:6379
    slots:[0-5460] (5461 slots) master
    1 additional replica(s)
    M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:6379
    slots:[10923-16383] (5461 slots) master
    1 additional replica(s)
    M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:6379
    slots: (0 slots) master
    S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:6380
    slots: (0 slots) slave
    replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:6380
    slots: (0 slots) slave
    replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
    S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:6380
    slots: (0 slots) slave
    replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
    M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:6379
    slots:[5461-10922] (5462 slots) master
    1 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    >>> Send CLUSTER MEET to node 10.0.0.71:6379 to make it join the cluster.
    [OK] New node added correctly.
    [root@redis01 module]# redis-cli --cluster add-node 10.0.0.72:6379 10.0.0.10:6379
    >>> Adding node 10.0.0.72:6379 to cluster 10.0.0.10:6379
    >>> Performing Cluster Check (using node 10.0.0.10:6379)
    M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:6379
    slots:[0-5460] (5461 slots) master
    1 additional replica(s)
    M: fd7f797b72b9f6c04de8d879743b2f6b508a7415 10.0.0.71:6379
    slots: (0 slots) master
    M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:6379
    slots:[10923-16383] (5461 slots) master
    1 additional replica(s)
    M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:6379
    slots: (0 slots) master
    S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:6380
    slots: (0 slots) slave
    replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:6380
    slots: (0 slots) slave
    replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
    S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:6380
    slots: (0 slots) slave
    replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
    M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:6379
    slots:[5461-10922] (5462 slots) master
    1 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    >>> Send CLUSTER MEET to node 10.0.0.72:6379 to make it join the cluster.
    [OK] New node added correctly.

    Add the replica nodes

    [root@redis01 module]# redis-cli --cluster add-node --cluster-slave 10.0.0.70:6380 10.0.0.71:6379
    >>> Adding node 10.0.0.70:6380 to cluster 10.0.0.71:6379
    >>> Performing Cluster Check (using node 10.0.0.71:6379)
    M: fd7f797b72b9f6c04de8d879743b2f6b508a7415 10.0.0.71:6379
    slots: (0 slots) master
    S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:6380
    slots: (0 slots) slave
    replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:6379
    slots: (0 slots) master
    M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:6379
    slots:[5461-10922] (5462 slots) master
    1 additional replica(s)
    S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:6380
    slots: (0 slots) slave
    replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
    M: d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4 10.0.0.72:6379
    slots: (0 slots) master
    S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:6380
    slots: (0 slots) slave
    replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
    M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:6379
    slots:[0-5460] (5461 slots) master
    1 additional replica(s)
    M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:6379
    slots:[10923-16383] (5461 slots) master
    1 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    Automatically selected master 10.0.0.71:6379
    >>> Send CLUSTER MEET to node 10.0.0.70:6380 to make it join the cluster.
    Waiting for the cluster to join

    >>> Configure node as replica of 10.0.0.71:6379.
    [OK] New node added correctly.
    [root@redis01 module]# redis-cli --cluster add-node --cluster-slave 10.0.0.71:6380 10.0.0.72:6379
    >>> Adding node 10.0.0.71:6380 to cluster 10.0.0.72:6379
    >>> Performing Cluster Check (using node 10.0.0.72:6379)
    M: d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4 10.0.0.72:6379
    slots: (0 slots) master
    M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:6379
    slots:[0-5460] (5461 slots) master
    1 additional replica(s)
    S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:6380
    slots: (0 slots) slave
    replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
    S: 2585c89e3e905202c3df1f8c0d75fe716591a972 10.0.0.70:6380
    slots: (0 slots) slave
    replicates fd7f797b72b9f6c04de8d879743b2f6b508a7415
    M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:6379
    slots: (0 slots) master
    M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:6379
    slots:[10923-16383] (5461 slots) master
    1 additional replica(s)
    S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:6380
    slots: (0 slots) slave
    replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    M: fd7f797b72b9f6c04de8d879743b2f6b508a7415 10.0.0.71:6379
    slots: (0 slots) master
    1 additional replica(s)
    S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:6380
    slots: (0 slots) slave
    replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
    M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:6379
    slots:[5461-10922] (5462 slots) master
    1 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    Automatically selected master 10.0.0.72:6379
    >>> Send CLUSTER MEET to node 10.0.0.71:6380 to make it join the cluster.
    Waiting for the cluster to join

    >>> Configure node as replica of 10.0.0.72:6379.
    [OK] New node added correctly.
    [root@redis01 module]# redis-cli --cluster add-node --cluster-slave 10.0.0.72:6380 10.0.0.70:6379
    >>> Adding node 10.0.0.72:6380 to cluster 10.0.0.70:6379
    >>> Performing Cluster Check (using node 10.0.0.70:6379)
    M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:6379
    slots: (0 slots) master
    M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:6379
    slots:[0-5460] (5461 slots) master
    1 additional replica(s)
    M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:6379
    slots:[10923-16383] (5461 slots) master
    1 additional replica(s)
    S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:6380
    slots: (0 slots) slave
    replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
    S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:6380
    slots: (0 slots) slave
    replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:6379
    slots:[5461-10922] (5462 slots) master
    1 additional replica(s)
    S: 2585c89e3e905202c3df1f8c0d75fe716591a972 10.0.0.70:6380
    slots: (0 slots) slave
    replicates fd7f797b72b9f6c04de8d879743b2f6b508a7415
    M: fd7f797b72b9f6c04de8d879743b2f6b508a7415 10.0.0.71:6379
    slots: (0 slots) master
    1 additional replica(s)
    M: d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4 10.0.0.72:6379
    slots: (0 slots) master
    1 additional replica(s)
    S: e6124e0803cc6a3b5b3b030a733f1e58d3cbb80c 10.0.0.71:6380
    slots: (0 slots) slave
    replicates d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4
    S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:6380
    slots: (0 slots) slave
    replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    Automatically selected master 10.0.0.70:6379
    >>> Send CLUSTER MEET to node 10.0.0.72:6380 to make it join the cluster.
    Waiting for the cluster to join

    >>> Configure node as replica of 10.0.0.70:6379.
    [OK] New node added correctly.

    Check the slot counts of the new nodes. The newly added masters all own 0 slots, so the cluster must be resharded.

    [root@redis01 module]# redis-cli --cluster check 10.0.0.70:6379
    10.0.0.70:6379 (76206b5f...) -> 0 keys | 0 slots | 1 slaves.
    10.0.0.10:6379 (aca05ab1...) -> 1 keys | 5461 slots | 1 slaves.
    10.0.0.61:6379 (c934fb00...) -> 1 keys | 5461 slots | 1 slaves.
    10.0.0.60:6379 (e6fd058c...) -> 1 keys | 5462 slots | 1 slaves.
    10.0.0.71:6379 (fd7f797b...) -> 0 keys | 0 slots | 1 slaves.
    10.0.0.72:6379 (d44d3c8b...) -> 0 keys | 0 slots | 1 slaves.
    [OK] 3 keys in 6 masters.
    0.00 keys per slot on average.
    >>> Performing Cluster Check (using node 10.0.0.70:6379)
    M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:6379
    slots: (0 slots) master
    1 additional replica(s)
    M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:6379
    slots:[0-5460] (5461 slots) master
    1 additional replica(s)
    M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:6379
    slots:[10923-16383] (5461 slots) master
    1 additional replica(s)
    S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:6380
    slots: (0 slots) slave
    replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
    S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:6380
    slots: (0 slots) slave
    replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:6379
    slots:[5461-10922] (5462 slots) master
    1 additional replica(s)
    S: 2585c89e3e905202c3df1f8c0d75fe716591a972 10.0.0.70:6380
    slots: (0 slots) slave
    replicates fd7f797b72b9f6c04de8d879743b2f6b508a7415
    M: fd7f797b72b9f6c04de8d879743b2f6b508a7415 10.0.0.71:6379
    slots: (0 slots) master
    1 additional replica(s)
    M: d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4 10.0.0.72:6379
    slots: (0 slots) master
    1 additional replica(s)
    S: e6124e0803cc6a3b5b3b030a733f1e58d3cbb80c 10.0.0.71:6380
    slots: (0 slots) slave
    replicates d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4
    S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:6380
    slots: (0 slots) slave
    replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
    S: 54924e469a1003913f135de3116c7d53c41b5e69 10.0.0.72:6380
    slots: (0 slots) slave
    replicates 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.

    Reshard

    [root@redis01 module]# redis-cli --cluster reshard 10.0.0.70:6379
    >>> Performing Cluster Check (using node 10.0.0.70:6379)
    M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:6379
    slots: (0 slots) master
    1 additional replica(s)
    M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:6379
    slots:[0-5460] (5461 slots) master
    1 additional replica(s)
    M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:6379
    slots:[10923-16383] (5461 slots) master
    1 additional replica(s)
    S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:6380
    slots: (0 slots) slave
    replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
    S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:6380
    slots: (0 slots) slave
    replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:6379
    slots:[5461-10922] (5462 slots) master
    1 additional replica(s)
    S: 2585c89e3e905202c3df1f8c0d75fe716591a972 10.0.0.70:6380
    slots: (0 slots) slave
    replicates fd7f797b72b9f6c04de8d879743b2f6b508a7415
    M: fd7f797b72b9f6c04de8d879743b2f6b508a7415 10.0.0.71:6379
    slots: (0 slots) master
    1 additional replica(s)
    M: d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4 10.0.0.72:6379
    slots: (0 slots) master
    1 additional replica(s)
    S: e6124e0803cc6a3b5b3b030a733f1e58d3cbb80c 10.0.0.71:6380
    slots: (0 slots) slave
    replicates d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4
    S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:6380
    slots: (0 slots) slave
    replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
    S: 54924e469a1003913f135de3116c7d53c41b5e69 10.0.0.72:6380
    slots: (0 slots) slave
    replicates 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    How many slots do you want to move (from 1 to 16384)? 5461
    What is the receiving node ID? 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a
    Please enter all the source node IDs.
    Type 'all' to use all the nodes as source nodes for the hash slots.
    Type 'done' once you entered all the source nodes IDs.
    Source node #1: aca05ab1ffe0079493ad73cd045b14bd21941e07
    Source node #2: done

    Ready to move 5461 slots.
    Source nodes:
    M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:6379
    slots:[0-5460] (5461 slots) master
    1 additional replica(s)
    Destination node:
    M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:6379
    slots: (0 slots) master
    1 additional replica(s)
    Resharding plan:
    Moving slot 0 from aca05ab1ffe0079493ad73cd045b14bd21941e07
    Moving slot 1 from aca05ab1ffe0079493ad73cd045b14bd21941e07
    Moving slot 2 from aca05ab1ffe0079493ad73cd045b14bd21941e07

    Do you want to proceed with the proposed reshard plan (yes/no)? yes
    Moving slot 0 from 10.0.0.10:6379 to 10.0.0.70:6379:
    Moving slot 1 from 10.0.0.10:6379 to 10.0.0.70:6379:
    Moving slot 2 from 10.0.0.10:6379 to 10.0.0.70:6379:
    Moving slot 3 from 10.0.0.10:6379 to 10.0.0.70:6379:
    Moving slot 4 from 10.0.0.10:6379 to 10.0.0.70:6379:
    Moving slot 5 from 10.0.0.10:6379 to 10.0.0.70:6379:
    Moving slot 6 from 10.0.0.10:6379 to 10.0.0.70:6379:
    Moving slot 7 from 10.0.0.10:6379 to 10.0.0.70:6379:
    Moving slot 8 from 10.0.0.10:6379 to 10.0.0.70:6379:

    [root@redis01 module]# redis-cli --cluster reshard 10.0.0.71:6379

    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    How many slots do you want to move (from 1 to 16384)? 5461
    What is the receiving node ID? fd7f797b72b9f6c04de8d879743b2f6b508a7415
    Please enter all the source node IDs.
    Type 'all' to use all the nodes as source nodes for the hash slots.
    Type 'done' once you entered all the source nodes IDs.
    Source node #1: e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    Source node #2: done

    Moving slot 10915 from e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    Moving slot 10916 from e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    Moving slot 10917 from e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    Moving slot 10918 from e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    Moving slot 10919 from e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    Moving slot 10920 from e6fd058cb888fbe014bbc93d59eaf0595b1d514c

    Do you want to proceed with the proposed reshard plan (yes/no)? yes

    Moving slot 10915 from 10.0.0.60:6379 to 10.0.0.71:6379:
    Moving slot 10916 from 10.0.0.60:6379 to 10.0.0.71:6379:
    Moving slot 10917 from 10.0.0.60:6379 to 10.0.0.71:6379:
    Moving slot 10918 from 10.0.0.60:6379 to 10.0.0.71:6379:
    Moving slot 10919 from 10.0.0.60:6379 to 10.0.0.71:6379:
    Moving slot 10920 from 10.0.0.60:6379 to 10.0.0.71:6379:
    Moving slot 10921 from 10.0.0.60:6379 to 10.0.0.71:6379:
    [root@redis01 module]# redis-cli --cluster reshard 10.0.0.72:6379

    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    How many slots do you want to move (from 1 to 16384)? 5461
    What is the receiving node ID? d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4
    Please enter all the source node IDs.
    Type 'all' to use all the nodes as source nodes for the hash slots.
    Type 'done' once you entered all the source nodes IDs.
    Source node #1: c934fb00e04727cbe3ebec8ec52b629df8a4c760
    Source node #2: done

    Verify that the old cluster's hash slots have all been moved to the new nodes

    [root@redis01 module]# redis-cli --cluster check 10.0.0.70:6379
    10.0.0.70:6379 (76206b5f...) -> 1 keys | 5461 slots | 2 slaves.
    10.0.0.10:6379 (aca05ab1...) -> 0 keys | 0 slots | 0 slaves.
    10.0.0.61:6379 (c934fb00...) -> 0 keys | 0 slots | 0 slaves.
    10.0.0.60:6379 (e6fd058c...) -> 0 keys | 1 slots | 1 slaves.
    10.0.0.71:6379 (fd7f797b...) -> 1 keys | 5461 slots | 1 slaves.
    10.0.0.72:6379 (d44d3c8b...) -> 1 keys | 5461 slots | 2 slaves.
    [OK] 3 keys in 6 masters.
    0.00 keys per slot on average.
    >>> Performing Cluster Check (using node 10.0.0.70:6379)
    M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:6379
    slots:[0-5460] (5461 slots) master
    2 additional replica(s)
    M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:6379
    slots: (0 slots) master
    M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:6379
    slots: (0 slots) master
    S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:6380
    slots: (0 slots) slave
    replicates d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4
    S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:6380
    slots: (0 slots) slave
    replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:6379
    slots:[10922] (1 slots) master
    1 additional replica(s)
    S: 2585c89e3e905202c3df1f8c0d75fe716591a972 10.0.0.70:6380
    slots: (0 slots) slave
    replicates fd7f797b72b9f6c04de8d879743b2f6b508a7415
    M: fd7f797b72b9f6c04de8d879743b2f6b508a7415 10.0.0.71:6379
    slots:[5461-10921] (5461 slots) master
    1 additional replica(s)
    M: d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4 10.0.0.72:6379
    slots:[10923-16383] (5461 slots) master
    2 additional replica(s)
    S: e6124e0803cc6a3b5b3b030a733f1e58d3cbb80c 10.0.0.71:6380
    slots: (0 slots) slave
    replicates d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4
    S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:6380
    slots: (0 slots) slave
    replicates 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a
    S: 54924e469a1003913f135de3116c7d53c41b5e69 10.0.0.72:6380
    slots: (0 slots) slave
    replicates 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.

    [root@redis01 module]# redis-cli --cluster info 10.0.0.70:6379
    10.0.0.70:6379 (76206b5f...) -> 1 keys | 5461 slots | 2 slaves.
    10.0.0.10:6379 (aca05ab1...) -> 0 keys | 0 slots | 0 slaves.
    10.0.0.61:6379 (c934fb00...) -> 0 keys | 0 slots | 0 slaves.
    10.0.0.60:6379 (e6fd058c...) -> 0 keys | 1 slots | 1 slaves.
    10.0.0.71:6379 (fd7f797b...) -> 1 keys | 5461 slots | 1 slaves.
    10.0.0.72:6379 (d44d3c8b...) -> 1 keys | 5461 slots | 2 slaves.
    [OK] 3 keys in 6 masters.
    0.00 keys per slot on average.

    Check the previously inserted data on the new nodes

    [root@redis01 module]# redis-cli -c -h 10.0.0.70 -p 6379
    10.0.0.70:6379> get name
    -> Redirected to slot [5798] located at 10.0.0.71:6379
    "tom"
    10.0.0.71:6379> get foo
    -> Redirected to slot [12182] located at 10.0.0.72:6379
    "bar"

    Tip: the interactive steps above can be done non-interactively in one command: redis-cli --cluster reshard <host>:<port> --cluster-from <source-node-id> --cluster-to <dest-node-id> --cluster-slots <number of slots> --cluster-yes
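Applied to the first reshard above, the non-interactive form would look like the following sketch. The node IDs are the ones shown in this cluster's transcripts (take them from `redis-cli --cluster check` on your own cluster), and the command is only printed here rather than executed:

```shell
# Old master 10.0.0.10:6379 -> new master 10.0.0.70:6379, moving all
# 5461 slots; IDs as listed by `redis-cli --cluster check` above.
FROM_ID=aca05ab1ffe0079493ad73cd045b14bd21941e07
TO_ID=76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a
CMD="redis-cli --cluster reshard 10.0.0.70:6379 --cluster-from $FROM_ID --cluster-to $TO_ID --cluster-slots 5461 --cluster-yes"
echo "$CMD"
```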

    Remove the original nodes

    Remove the replica nodes first

    [root@redis01 module]# redis-cli --cluster del-node 10.0.0.10:6380 e6124e0803cc6a3b5b3b030a733f1e58d3cbb80c
    >>> Removing node e6124e0803cc6a3b5b3b030a733f1e58d3cbb80c from cluster 10.0.0.10:6380
    >>> Sending CLUSTER FORGET messages to the cluster...
    >>> SHUTDOWN the node.
    [root@redis01 module]# redis-cli --cluster del-node 10.0.0.60:6380 25b3f9781fd913d2c783ab24fa2c79a74a08070b
    >>> Removing node 25b3f9781fd913d2c783ab24fa2c79a74a08070b from cluster 10.0.0.60:6380
    >>> Sending CLUSTER FORGET messages to the cluster...
    >>> SHUTDOWN the node.
    [root@redis01 module]# redis-cli --cluster del-node 10.0.0.61:6380 9b88fbde76b12e035d71056881c8ed09ce6aeb0d
    >>> Removing node 9b88fbde76b12e035d71056881c8ed09ce6aeb0d from cluster 10.0.0.61:6380
    >>> Sending CLUSTER FORGET messages to the cluster...
    >>> SHUTDOWN the node.

    Note:

    Be careful not to mistype the node ID; del-node removes nodes by ID. If you enter the wrong node ID you will see an error like:

    [ERR] Node 10.0.0.71:6380 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.

    Start that node again and run a fix:

    [root@redis05 module]# redis-server 6380/etc/redis.conf
    [root@redis01 module]# redis-cli --cluster fix 10.0.0.71:6380

    Remove the pre-migration master nodes from the cluster.

    When removing a master node, note the following:
    - If the master still has replicas, move them to another master or remove them first.
    - If the master still owns slots, move the slots away before deleting it.

    A master must own 0 slots, i.e. be empty, before it is removed; otherwise the whole Redis Cluster may stop working.

    If the master to be removed is not empty, first use the reshard command to move its data to other nodes.
    Another way to remove a master is to trigger a manual failover first, wait for one of its replicas to be elected as the new master, and remove it after it has rejoined the cluster as a replica. Obviously this does not help if you want to reduce the number of masters; in that case you still need to reshard the data away first.

    [root@redis01 module]# redis-cli --cluster del-node 10.0.0.60:6379 e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    >>> Removing node e6fd058cb888fbe014bbc93d59eaf0595b1d514c from cluster 10.0.0.60:6379
    [ERR] Node 10.0.0.60:6379 is not empty! Reshard data away and try again.
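The empty-master precondition can be checked mechanically before each del-node by parsing the summary lines of `redis-cli --cluster check`. A sketch; it runs here against a captured sample from the transcripts above instead of a live cluster:

```shell
# Summary lines as printed by `redis-cli --cluster check <host:port>`
# (captured sample; on a real cluster, pipe the live output in instead).
check_output='10.0.0.10:6379 (aca05ab1...) -> 0 keys | 0 slots | 0 slaves.
10.0.0.60:6379 (e6fd058c...) -> 0 keys | 1 slots | 1 slaves.'

# Print the slot count owned by one master; 0 means safe to del-node.
slots_of() {
    printf '%s\n' "$2" | awk -v node="$1" '$1 == node { print $7 }'
}

slots_of 10.0.0.10:6379 "$check_output"   # prints 0: safe to remove
slots_of 10.0.0.60:6379 "$check_output"   # prints 1: reshard away first
```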

    All slots have now been moved off the original three masters (each owns 0 slots), and their respective replicas were already deleted above.

    The master nodes can now be removed

    [root@redis01 module]# redis-cli --cluster del-node 10.0.0.10:6379 f2a07c9d27d6d62d40bec2bc6914fd0757e7d072
    >>> Removing node f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 from cluster 10.0.0.10:6379
    >>> Sending CLUSTER FORGET messages to the cluster...
    >>> SHUTDOWN the node.

    [root@redis01 module]# redis-cli --cluster del-node 10.0.0.60:6379 e6fd058cb888fbe014bbc93d59eaf0595b1d514c
    >>> Removing node e6fd058cb888fbe014bbc93d59eaf0595b1d514c from cluster 10.0.0.60:6379
    >>> Sending CLUSTER FORGET messages to the cluster...
    >>> SHUTDOWN the node.

    [root@redis01 module]# redis-cli --cluster del-node 10.0.0.61:6379 c934fb00e04727cbe3ebec8ec52b629df8a4c760
    >>> Removing node c934fb00e04727cbe3ebec8ec52b629df8a4c760 from cluster 10.0.0.61:6379
    >>> Sending CLUSTER FORGET messages to the cluster...
    >>> SHUTDOWN the node.
Original post: https://www.cnblogs.com/zh-dream/p/12297547.html