Master-Slave Replication
A master/slave mechanism in which data updated on the master is automatically synchronized to the replicas according to the configured strategy. The master mainly handles writes; the slaves mainly handle reads.
Uses
Read/write splitting and disaster recovery.
Setup
Configure the slaves, not the master.
Slave configuration
slaveof <master-ip> <master-port>: after every restart the slave must be re-attached with this command, unless the relationship is written into redis.conf (a config sketch follows below).
info replication: check the instance's role and replication status.
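The runtime slaveof command is forgotten when the slave restarts. A minimal sketch of making the relationship permanent in the slave's config file (assuming the master runs on 127.0.0.1:6379; the masterauth line is only needed if the master has requirepass set):

```conf
# In the slave's redis.conf (e.g. redis80.conf):
slaveof 127.0.0.1 6379            # re-attach to this master automatically on startup
# masterauth <master-password>    # only needed if the master requires a password
```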
Configuration file changes (preparation; a sample snippet follows this list)
- Copy redis.conf once per instance
- Enable daemonize yes
- Set the pid file name
- Set the port
- Set the log file name
- Set the dump.rdb file name
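A minimal sketch of what those per-instance changes might look like, here for the 6380 instance (the file name redis80.conf and the exact paths are illustrative):

```conf
# redis80.conf: settings changed relative to the copied redis.conf
daemonize yes
pidfile /var/run/redis_6380.pid
port 6380
logfile "6380.log"
dbfilename dump6380.rdb
```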
Three topologies (for this demo, one master and two slaves were started locally on ports 6379, 6380 and 6381)
One master, two slaves:
- init
- One master with two slaves
- Demonstration of the issues that come with master/slave replication
```
# One master, two slaves:
# Port 6379 - master

[root@localhost myredis]# ls
redis79.conf redis81.conf
redis80.conf
[root@localhost myredis]# redis-server redis79.conf
[root@localhost myredis]# redis-cli -p 6379
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> set k2 v2
OK
127.0.0.1:6379> set k3 v3
OK
127.0.0.1:6379> get k3
"v3"
127.0.0.1:6379> keys *
1) "k2"
2) "k1"
3) "k3"
127.0.0.1:6379> set k4 v4
OK
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6380,state=online,offset=445,lag=1
slave1:ip=127.0.0.1,port=6381,state=online,offset=445,lag=0
master_repl_offset:445
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:444
127.0.0.1:6379> set k5 v5
OK
127.0.0.1:6379> set k6 v6
OK
127.0.0.1:6379> SHUTDOWN
not connected> exit
[root@localhost myredis]# redis-server redis79.conf
[root@localhost myredis]# redis-cli -p 6379
127.0.0.1:6379> keys *
1) "k3"
2) "k6"
3) "k1"
4) "k5"
5) "k4"
6) "k2"
127.0.0.1:6379> set k7 v7
OK
127.0.0.1:6379> get k7
"v7"
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6381,state=online,offset=165,lag=0
slave1:ip=127.0.0.1,port=6380,state=online,offset=165,lag=0
master_repl_offset:165
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:164
127.0.0.1:6379> set k8 v8
OK
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6381,state=online,offset=642,lag=0
master_repl_offset:642
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:641
127.0.0.1:6379>

# Port 6380 - slave
[root@localhost myredis]# ls
redis79.conf redis80.conf redis81.conf
[root@localhost myredis]# redis-server redis80.conf
[root@localhost myredis]# redis-cli -p 6380
127.0.0.1:6380> keys *
(empty list or set)
127.0.0.1:6380> info replication
# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6380> SLAVEOF 127.0.0.1 6379
OK
127.0.0.1:6380> get k4
"v4"
127.0.0.1:6380> get k1
"v1"
127.0.0.1:6380> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
master_last_io_seconds_ago:3
master_sync_in_progress:0
slave_repl_offset:557
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6380> get k5
"v5"
127.0.0.1:6380> set k6 v66
(error) READONLY You can't write against a read only slave.
127.0.0.1:6380> keys *
1) "k6"
2) "k1"
3) "k5"
4) "k4"
5) "k2"
6) "k3"
127.0.0.1:6380> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:down
master_last_io_seconds_ago:-1
master_sync_in_progress:0
slave_repl_offset:1525
master_link_down_since_seconds:30
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6380> get k7
"v7"
127.0.0.1:6380> SHUTDOWN
not connected> exit
[root@localhost myredis]# redis-server redis80.conf
[root@localhost myredis]# redis-cli -p 6380
127.0.0.1:6380> info replication
# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6380> get k8
(nil)
127.0.0.1:6380> SLAVEOF 127.0.0.1 6379
OK
127.0.0.1:6380> get k8
"v8"
127.0.0.1:6380>

# Port 6381 - slave
[root@localhost myredis]# ls
redis79.conf redis80.conf redis81.conf
[root@localhost myredis]# redis-server redis81.conf
[root@localhost myredis]# redis-cli -p 6381
127.0.0.1:6381> keys *
(empty list or set)
127.0.0.1:6381> info replication
# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6381> SLAVEOF 127.0.0.1 6379
OK
127.0.0.1:6381> get k4
"v4"
127.0.0.1:6381> get k2
"v2"
127.0.0.1:6381> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
master_last_io_seconds_ago:10
master_sync_in_progress:0
slave_repl_offset:585
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6381> get k5
"v5"
127.0.0.1:6381> set k6 v666
(error) READONLY You can't write against a read only slave.
127.0.0.1:6381> keys *
1) "k5"
2) "k1"
3) "k6"
4) "k4"
5) "k2"
6) "k3"
127.0.0.1:6381> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:down
master_last_io_seconds_ago:-1
master_sync_in_progress:0
slave_repl_offset:1525
master_link_down_since_seconds:43
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6381> get k8
"v8"
127.0.0.1:6381>
```
Summary:
Issues observed:
Having too many slaves is not great either: they all sync from the master and put extra load on it.
1. If the master already holds keys before the two slaves run slaveof, the slaves still receive all of those earlier keys; that first copy is a full synchronization. Keys set on the master afterwards are propagated by incremental synchronization.
2. If you try to set the same key on all three instances, only the master accepts the write (this demonstrates read/write splitting: only the master can write).
3. Failures: any of the instances can go down.
If the master goes down, the slaves remain slaves. When the master comes back, the roles are unchanged: it is still the master, the slaves are still slaves, and newly written values are still replicated.
If a slave goes down, it has to be re-attached after a restart unless slaveof is written into its config file; in the meantime the master is left with only one slave (both of which keep working). The restarted instance comes back as a standalone master, so it no longer receives values the current master writes. To catch up it must run slaveof against the master again, unless slaveof is in its config file.
Chained replication:
- A slave can itself be the master of the next slave; a slave can accept connection and sync requests from other slaves, acting as the next master in the chain, which effectively relieves the top master's replication load
- Re-pointing a slave mid-way clears its existing data and rebuilds a fresh copy from the new master
- slaveof <new-master-ip> <new-master-port>
```
# Chained replication
# Port 6379 - master
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6380,state=online,offset=2826,lag=0
master_repl_offset:2826
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:2825
127.0.0.1:6379> set k9 v9
OK
127.0.0.1:6379> keys *
1) "k6"
2) "k3"
3) "k5"
4) "k2"
5) "k9"
6) "k8"
7) "k1"
8) "k7"
9) "k4"
127.0.0.1:6379>
# Port 6380 - slave
127.0.0.1:6380> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
master_last_io_seconds_ago:3
master_sync_in_progress:0
slave_repl_offset:2812
slave_priority:100
slave_read_only:1
connected_slaves:1
slave0:ip=127.0.0.1,port=6381,state=online,offset=71,lag=1
master_repl_offset:71
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:70
127.0.0.1:6380> get k9
"v9"
127.0.0.1:6380> keys *
1) "k5"
2) "k6"
3) "k2"
4) "k9"
5) "k8"
6) "k4"
7) "k1"
8) "k3"
9) "k7"
127.0.0.1:6380>

# Port 6381 - slave
127.0.0.1:6381> SLAVEOF 127.0.0.1 6380
OK
127.0.0.1:6381> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6380
master_link_status:up
master_last_io_seconds_ago:7
master_sync_in_progress:0
slave_repl_offset:43
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6381> get k9
"v9"
127.0.0.1:6381> keys *
1) "k9"
2) "k4"
3) "k6"
4) "k8"
5) "k3"
6) "k2"
7) "k5"
8) "k1"
9) "k7"
127.0.0.1:6381>
```
Summary:
This does relieve the master's replication load, but re-pointing a slave mid-way means its data is copied again from scratch.
Promoting a slave to master:
- slaveof no one stops the current instance from syncing with any other instance and turns it into a master
```
# Promoting a slave (slaveof no one)
# 6380 and 6381 become their own master/slave pair, while the old 6379 comes back as a standalone instance
# Port 6379
127.0.0.1:6379> SHUTDOWN
not connected> exit
[root@localhost myredis]# redis-server redis79.conf
[root@localhost myredis]# redis-cli
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6379> get k10
(nil)
127.0.0.1:6379>

# Port 6380
127.0.0.1:6380> SLAVEOF no one
OK
127.0.0.1:6380> info replication
# Replication
role:master
connected_slaves:0
master_repl_offset:1229
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:1228
127.0.0.1:6380> set k10 v10
OK
127.0.0.1:6380> get k10
"v10"
127.0.0.1:6380>

# Port 6381
127.0.0.1:6381> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:down
master_last_io_seconds_ago:-1
master_sync_in_progress:0
slave_repl_offset:3886
master_link_down_since_seconds:138
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6381> SLAVEOF 127.0.0.1 6380
OK
127.0.0.1:6381> get k10
"v10"
127.0.0.1:6381>
```
Summary:
When the instance on port 6379 went down, a new master had to be picked manually (6380 here), and the remaining slave had to be re-pointed at it by hand; a compact runbook sketch follows.
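A compact sketch of that manual failover, assuming 6380 is chosen as the new master and 6381 stays a slave:

```bash
# 1. Promote the chosen instance to master:
redis-cli -p 6380 SLAVEOF NO ONE
# 2. Re-point every remaining slave at the new master:
redis-cli -p 6381 SLAVEOF 127.0.0.1 6380
# 3. If the old 6379 comes back and should rejoin, attach it as a slave too:
redis-cli -p 6379 SLAVEOF 127.0.0.1 6380
```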
Replication internals:
- After a slave starts and successfully connects to the master, it sends a SYNC command
- On receiving it, the master starts a background save to disk and, at the same time, buffers all write commands it receives
- When the background save finishes, the master sends the entire data file to the slave to complete one full synchronization
- Full copy: the slave receives the database file, saves it to disk and loads it into memory
- Incremental copy: the master then streams the collected write commands to the slave one by one to keep it in sync; however, whenever a slave reconnects to the master, a full synchronization is performed automatically
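A minimal way to observe these full and incremental syncs from the master's side (assuming the master listens on 6379) is to watch the sync counters in INFO stats:

```bash
redis-cli -p 6379 INFO stats | grep sync
# sync_full:1          <- full synchronizations (an RDB snapshot was shipped to a slave)
# sync_partial_ok:0    <- partial resyncs served from the replication backlog
# sync_partial_err:0   <- partial resync requests that fell back to a full sync
```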
Sentinel mode (sentinel)
Steps:
- For this part the topology is back to 6379 as master with 6380 and 6381 as its slaves
- Create a new sentinel.conf file in the custom /myredis directory
- Configure the sentinel: sentinel monitor <monitor-name> 127.0.0.1 6379 1 (a fuller sample sentinel.conf sketch follows this list)
- The trailing 1 is the quorum: how many sentinels must agree that the master is down before a failover starts; the new master is then elected from among the slaves
- Start the sentinel: redis-sentinel /myredis/sentinel.conf
- The original master goes down
- A new master is elected
- Replication resumes under the new master; check with info replication
- Question: if the old master comes back, will two masters conflict? (No: it is demoted to a slave of the new master)
- One group of sentinels can monitor multiple masters at the same time
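A slightly fuller sentinel.conf sketch than the single line used in the transcript below (the extra directives are optional and shown with their usual defaults; host6379 is just the monitor name chosen above):

```conf
port 26379                                          # default sentinel port
sentinel monitor host6379 127.0.0.1 6379 1          # watch this master, quorum = 1
sentinel down-after-milliseconds host6379 30000     # consider it down after 30s of silence
sentinel failover-timeout host6379 180000           # overall failover timeout
sentinel parallel-syncs host6379 1                  # slaves reconfigured in parallel after failover
```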
```
# Sentinel mode
# Conclusion: once a new master has been elected, the old master rejoins as a slave when it comes back
# Steps on the host machine:
[root@localhost myredis]# ls
redis79.conf redis80.conf redis81.conf
[root@localhost myredis]# touch sentinel.conf
[root@localhost myredis]# ls
redis79.conf redis80.conf redis81.conf sentinel.conf
[root@localhost myredis]# vim sentinel.conf
# Put "sentinel monitor host6379 127.0.0.1 6379 1" into sentinel.conf
# Start the sentinel
[root@localhost myredis]# redis-sentinel /opt/myredis/sentinel.conf
7724:X 05 Sep 18:16:16.414 * Increased maximum number of open files to 10032 (it was originally set to 1024).
(Redis 3.2.12 startup banner: Running in sentinel mode, Port: 26379, PID: 7724, http://redis.io)
7724:X 05 Sep 18:16:16.416 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
7724:X 05 Sep 18:16:16.419 # Sentinel ID is xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
7724:X 05 Sep 18:16:16.419 # +monitor master host6379 127.0.0.1 6379 quorum 1
7724:X 05 Sep 18:16:16.420 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ host6379 127.0.0.1 6379
7724:X 05 Sep 18:16:16.422 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ host6379 127.0.0.1 6379
7724:X 05 Sep 18:18:33.297 # +sdown master host6379 127.0.0.1 6379
7724:X 05 Sep 18:18:33.297 # +odown master host6379 127.0.0.1 6379 #quorum 1/1
7724:X 05 Sep 18:18:33.297 # +new-epoch 1
7724:X 05 Sep 18:18:33.297 # +try-failover master host6379 127.0.0.1 6379
7724:X 05 Sep 18:18:33.369 # +vote-for-leader xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 1
7724:X 05 Sep 18:18:33.369 # +elected-leader master host6379 127.0.0.1 6379
7724:X 05 Sep 18:18:33.369 # +failover-state-select-slave master host6379 127.0.0.1 6379
7724:X 05 Sep 18:18:33.470 # +selected-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ host6379 127.0.0.1 6379
7724:X 05 Sep 18:18:33.470 * +failover-state-send-slaveof-noone slave 127.0.0.1:6381 127.0.0.1 6381 @ host6379 127.0.0.1 6379
7724:X 05 Sep 18:18:33.537 * +failover-state-wait-promotion slave 127.0.0.1:6381 127.0.0.1 6381 @ host6379 127.0.0.1 6379
7724:X 05 Sep 18:18:33.879 # +promoted-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ host6379 127.0.0.1 6379
7724:X 05 Sep 18:18:33.879 # +failover-state-reconf-slaves master host6379 127.0.0.1 6379
7724:X 05 Sep 18:18:33.934 * +slave-reconf-sent slave 127.0.0.1:6380 127.0.0.1 6380 @ host6379 127.0.0.1 6379
7724:X 05 Sep 18:18:34.137 * +slave-reconf-inprog slave 127.0.0.1:6380 127.0.0.1 6380 @ host6379 127.0.0.1 6379
7724:X 05 Sep 18:18:35.153 * +slave-reconf-done slave 127.0.0.1:6380 127.0.0.1 6380 @ host6379 127.0.0.1 6379
7724:X 05 Sep 18:18:35.206 # +failover-end master host6379 127.0.0.1 6379
7724:X 05 Sep 18:18:35.206 # +switch-master host6379 127.0.0.1 6379 127.0.0.1 6381
7724:X 05 Sep 18:18:35.206 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ host6379 127.0.0.1 6381
7724:X 05 Sep 18:18:35.206 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ host6379 127.0.0.1 6381
7724:X 05 Sep 18:19:05.227 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ host6379 127.0.0.1 6381
7724:X 05 Sep 18:23:53.867 # -sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ host6379 127.0.0.1 6381
7724:X 05 Sep 18:24:03.814 * +convert-to-slave slave 127.0.0.1:6379 127.0.0.1 6379 @ host6379 127.0.0.1 6381

# Port 6379 - the former master
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6380,state=online,offset=57,lag=0
slave1:ip=127.0.0.1,port=6381,state=online,offset=57,lag=0
master_repl_offset:57
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:56
127.0.0.1:6379> keys *
1) "k3"
2) "k9"
3) "k5"
4) "k4"
5) "k6"
6) "k8"
7) "k1"
8) "k7"
9) "k2"

127.0.0.1:6379> SHUTDOWN
not connected> exit
[root@localhost myredis]# redis-server /opt/myredis/redis79.conf
[root@localhost myredis]# redis-cli -p 6379
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6381
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:22499
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6379> get k10
"v10"
127.0.0.1:6379>

# Port 6380 - slave
127.0.0.1:6380> SLAVEOF 127.0.0.1 6379
OK
127.0.0.1:6380> keys *
1) "k5"
2) "k6"
3) "k2"
4) "k9"
5) "k8"
6) "k4"
7) "k1"
8) "k7"
9) "k3"
127.0.0.1:6380> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6381
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:8172
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:16845
127.0.0.1:6380> get k10
"v10"
127.0.0.1:6380>

# Port 6381 - former slave, promoted to master by the sentinel election
127.0.0.1:6381> SLAVEOF 127.0.0.1 6379
OK
127.0.0.1:6381> keys *
1) "k9"
2) "k6"
3) "k4"
4) "k2"
5) "k8"
6) "k3"
7) "k5"
8) "k1"
9) "k7"
127.0.0.1:6381> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6380,state=online,offset=7227,lag=1
master_repl_offset:7227
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:7226
127.0.0.1:6381> set k10 v10
OK
127.0.0.1:6381> get k10
"v10"
127.0.0.1:6381> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6380,state=online,offset=95845,lag=0
slave1:ip=127.0.0.1,port=6379,state=online,offset=95845,lag=0
master_repl_offset:95845
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:95844
127.0.0.1:6381>
```
Drawbacks of master-slave replication:
Because all writes happen on the master and are only then propagated to the slaves, there is always some replication delay. The busier and more complex the system, the worse the delay gets, and it also grows with the number of slaves. Given this, try to avoid overloading the system and keep the number of slaves, and the load placed on them, within reason. (A quick way to gauge the lag is sketched below.)
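A rough way to gauge this replication delay (assuming the master listens on 6379) is to compare master_repl_offset with the offset each slave reports in the master's INFO replication output:

```bash
redis-cli -p 6379 INFO replication | grep -E 'master_repl_offset|^slave[0-9]'
# slave0:ip=127.0.0.1,port=6380,state=online,offset=445,lag=1
# slave1:ip=127.0.0.1,port=6381,state=online,offset=445,lag=0
# master_repl_offset:445   <- a slave whose offset equals this value has caught up
```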