• Building a MySQL High-Availability Replication Cluster with MHA


    MHA Overview

    MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability, and an excellent piece of software for failover and master promotion in a MySQL HA environment. During a MySQL failover, MHA can complete the database failover automatically within 0 to 30 seconds, and while doing so it preserves data consistency to the greatest extent possible, achieving high availability in the true sense.

    MHA monitors the master node. Once the master fails, MHA automatically probes the slaves according to its configuration, determines which slave holds the most up-to-date data, and promotes that node to be the new master. When the old master comes back online, it does not become the master again; it can only work in the slave role.

    MHA Depends on SSH

    During an automatic failover, MHA tries to save the binary logs from the failed master to minimize data loss. However, if the failed server suffers a hardware fault, loses power, or is otherwise unreachable over SSH, those logs cannot be saved.

    MHA copies the master's binary log events over SSH to a local replica. When the master goes down, MHA uses this local copy to fill in the missing data on the chosen slave, promotes it to the new master, and repoints the other slaves at it.

    How MHA Works

    MHA primarily supports a one-master, multi-slave architecture. Building an MHA setup requires at least three database servers in a replication cluster: one master, one slave acting as a candidate master, and one additional slave. Because at least three servers are needed, Taobao modified MHA to cut hardware cost; their TMHA variant now supports a one-master, one-slave topology.

    A brief summary of the MHA failover workflow:

    • 1 Save the binary log events (binlog events) from the crashed master (see the sketch after this list)
    • 2 Identify the slave with the most recent updates
    • 3 Apply the differential relay log (relay log) to the other slaves
    • 4 Apply the binary log events saved from the master
    • 5 Promote one slave to be the new master
    • 6 Point the other slaves at the new master and resume replication
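
    As an illustration of step 1, MHA's node-side save_binary_logs tool can also be invoked by hand. This is a hedged sketch only (MHA normally calls it automatically); the flags follow mha4mysql-node 0.56, and the binlog path and file name are placeholders:

    # run on the crashed master, or wherever its binlog directory is reachable
    save_binary_logs --command=save --start_file=master-bin.000004 \
        --binlog_dir=/var/lib/mysql --output_file=/tmp/saved_master_binlog.binlog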

    Software Components

    The MHA software consists of two parts: the Manager tool package and the Node tool package.

    Manager tool package

    • masterha_check_ssh: check MHA's SSH configuration
    • masterha_check_repl: check MySQL replication status
    • masterha_manager: start MHA
    • masterha_check_status: check the current MHA running state
    • masterha_master_monitor: detect whether the master is down
    • masterha_master_switch: control failover (automatic or manual)
    • masterha_conf_host: add or remove configured server information

    Node tool package (these tools are usually triggered by MHA Manager scripts and need no manual operation):

    • save_binary_logs: save and copy the master's binary logs
    • apply_diff_relay_logs: identify differential relay log events and apply the differences to the other slaves
    • filter_mysqlbinlog: strip unnecessary ROLLBACK events (MHA no longer uses this tool)
    • purge_relay_logs: purge relay logs without blocking the SQL thread (see the example after this list)
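
    Because the slaves set relay_log_purge=0 (see the slave configuration later in this post), relay logs pile up and must be purged periodically. A common pattern, shown here as a hedged sketch (the credentials and schedule are placeholders), is a cron job running purge_relay_logs on each slave:

    # /etc/cron.d/purge_relay_logs -- run on each slave at an off-peak time
    0 4 * * * root /usr/bin/purge_relay_logs --user=mhauser --password=123456 --disable_relay_log_purge --workdir=/tmp >> /var/log/purge_relay_logs.log 2>&1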

    Note: to minimize data loss when the master crashes due to hardware failure, it is recommended to configure semi-synchronous replication alongside MHA; a sketch follows below.
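
    A minimal sketch of enabling semi-synchronous replication, assuming the semisync plugins bundled with MariaDB 5.5 / MySQL 5.5 (plugin and variable names may differ in newer versions):

    -- on the master
    MariaDB [(none)]> install plugin rpl_semi_sync_master soname 'semisync_master.so';
    MariaDB [(none)]> set global rpl_semi_sync_master_enabled=1;
    MariaDB [(none)]> set global rpl_semi_sync_master_timeout=1000;  -- ms; fall back to async after 1s

    -- on each slave
    MariaDB [(none)]> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
    MariaDB [(none)]> set global rpl_semi_sync_slave_enabled=1;
    MariaDB [(none)]> stop slave io_thread;
    MariaDB [(none)]> start slave io_thread;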

    Custom Extensions:

    • secondary_check_script: check the master's availability via multiple network routes
    • master_ip_failover_script: update the master IP used by the application (see the sketch after this list)
    • shutdown_script: force the master node (the failed node) to shut down
    • report_script: send a report
    • init_conf_load_script: load initial configuration parameters (rarely useful)
    • master_ip_online_change_script: update the master IP address during a planned online switch
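
    These scripts are hooked in through the MHA configuration file. A sketch only (the script paths are placeholders; the parameter names are standard MHA options):

    [server default]
    master_ip_failover_script=/usr/local/bin/master_ip_failover
    master_ip_online_change_script=/usr/local/bin/master_ip_online_change
    shutdown_script=/usr/local/bin/power_manager
    report_script=/usr/local/bin/send_report
    secondary_check_script=masterha_secondary_check -s 192.168.214.134 -s 192.168.214.130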

    Configuration files:

    • global configuration: provides default settings shared by all applications
    • application configuration: one per master-slave replication cluster

    Note: every slave node must enable both the relay log and the binary log, because any slave may become the new master, and as the master it must provide a binary log for the other slaves to replicate from.

    Deploying MHA

    The lab uses four hosts, all running CentOS 7.4:

    • manger: the MHA management node
    • master: the master node of the replication cluster
    • slave-1: replication slave 1
    • slave-2: replication slave 2

    Pre-deployment notes:

    Environment preparation: disable SELinux and iptables, and synchronize time across all nodes.

    1. Set up the master-slave replication environment
    2. Configure SSH key authentication across the cluster: between the manager node and the database nodes
    3. On all nodes, create a dedicated account with administrative privileges for the manager to connect with; in a replicated environment, creating it once on the master is enough, as the other nodes receive it through replication
    4. Install the manager packages on the manager node (the packages depend on the EPEL repository)
    5. Create the configuration file on the management node
    6. Check that the environment is ready
    7. Start cluster monitoring (on the manger node)

    Environment Preparation

    Synchronize time and disable the firewall and SELinux:

    
    [root@master ~]# iptables -F
    [root@master ~]# systemctl stop firewalld
    [root@master ~]# setenforce 0
    [root@master ~]# ntpdate gudaoyufu.com    # repeat these steps on every node
    

    Install MariaDB

    [root@master ~]# yum install mariadb-server -y    # install on all three database hosts (master and both slaves)
    

    Configure One Master with Multiple Slaves

    master node:

    [mysqld]
    log_bin=master-bin
    server_id= 1
    innodb_file_per_table=on
    sync_binlog=1
    skip_name_resolve
    

    Create the replication user:

    MariaDB [(none)]> grant replication slave on *.* to copyuser@'192.168.214.%' identified by '123456';
    
    MariaDB [(none)]> flush privileges;
    

    slave-1 node:

    [mysqld]
    relay_log=relay-log
    server-id=2
    innodb_file_per_table=on
    skip_name_resolve
    relay_log_purge=0
    log_bin=master-bin
    
    

    slave-2 node:

    [mysqld]
    relay_log=relay-log
    server-id=3
    innodb_file_per_table=on
    skip_name_resolve
    relay_log_purge=0
    log_bin=master-bin
    

    After changing the configuration, restart the service on each node, as shown below.
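
    For example, on each of the three database nodes:

    [root@master ~]# systemctl restart mariadb
    [root@slave-1 ~]# systemctl restart mariadb
    [root@slave-2 ~]# systemctl restart mariadb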

    Log in to slave-1 and point it at the master:
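
    First check the master's current binlog coordinates on the master node; the change master statement below uses them (the file name and position will differ in your environment):

    MariaDB [(none)]> show master status;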

    MariaDB [(none)]> change master to master_host='192.168.214.142',master_user='copyuser',master_port=3306,master_password='123456',master_log_file='master-bin.000004',master_log_pos=245;
    
    MariaDB [(none)]> start slave;
    

    Log in to slave-2 and connect to the master (same procedure as slave-1):

    MariaDB [(none)]> change master to master_host='192.168.214.142',master_user='copyuser',master_port=3306,master_password='123456',master_log_file='master-bin.000004',master_log_pos=245;
    
    MariaDB [(none)]> start slave;
    

    Test replication from the master
    Log in to the master node and create a database to test replication:

    MariaDB [(none)]> create database db1;
    

    Check replication on slave-1 and slave-2:

    MariaDB [(none)]> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | db1                |
    | mysql              |
    | performance_schema |
    | test               |
    +--------------------+
    

    Configure the MHA Environment

    Configure SSH key authentication among the cluster hosts

    Key authentication between the manager node and the node hosts:

    [root@manger ~]# ssh-keygen
    [root@manger ~]# ssh-copy-id 192.168.214.143
    
    [root@manger ~]# ls .ssh/
    authorized_keys  id_rsa  id_rsa.pub  known_hosts
    

    Distribute the entire .ssh directory (including authorized_keys and the private key) to every node, so that all hosts can authenticate to one another:

    [root@manger ~]# scp -pr .ssh/ 192.168.214.142:/root/
    
    [root@manger ~]# scp -pr .ssh/ 192.168.214.134:/root/
    
    [root@manger ~]# scp -pr .ssh/ 192.168.214.130:/root/
    

    Test key authentication:

    [root@manger .ssh]# ssh 192.168.214.142 'hostname'
    master
    [root@manger .ssh]# ssh 192.168.214.134 'hostname'
    slave-1
    [root@manger .ssh]# ssh 192.168.214.130 'hostname'
    slave-2
    

    Install the manager and node packages on the manger node

    The packages depend on the EPEL repository; before installing, check whether EPEL is present, and install it if not:

    [root@manger ~]# yum install epel-release.noarch -y

    Install the management packages on the manger node (this node gets both packages):

    [root@manger ~]# yum install mha4mysql-manager-0.56-0.el6.noarch.rpm mha4mysql-node-0.56-0.el6.noarch.rpm -y
    

    Install the node package on every MySQL cluster node:

    [root@master ~]# yum install mha4mysql-node-0.56-0.el6.noarch.rpm -y
    
    [root@slave-1 ~]# yum install mha4mysql-node-0.56-0.el6.noarch.rpm -y
    
    [root@slave-2 ~]# yum install mha4mysql-node-0.56-0.el6.noarch.rpm -y
    

    Configure the manger Node

    Create a user with administrative privileges on every MySQL cluster node for MHA to connect with; create it on the master and it replicates to the slaves automatically:

    MariaDB [(none)]> grant all privileges on *.* to mhauser@'192.168.214.%' identified by '123456';
    
    MariaDB [(none)]> flush privileges;
    

    Create the directory for the MHA configuration file:

    [root@manger ~]# mkdir -p /etc/mha/

    Edit the management configuration file:

    [root@manger ~]# vim /etc/mha/app1.cnf

    [server default]
    user=mhauser
    password=123456
    manager_workdir=/data/mastermha/app1/
    manager_log=/data/mastermha/app1/manager.log
    remote_workdir=/data/mastermha/app1/
    ssh_user=root
    repl_user=copyuser
    repl_password=123456
    ping_interval=1
    [server1]
    hostname=192.168.214.142
    candidate_master=1
    [server2]
    hostname=192.168.214.134
    candidate_master=1
    [server3]
    hostname=192.168.214.130
    

    Check the manger node configuration

    Check SSH connectivity:

    [root@manger ~]# masterha_check_ssh --conf=/etc/mha/app1.cnf 
    Sat Jun 16 18:09:26 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
    Sat Jun 16 18:09:26 2018 - [info] Reading application default configuration from /etc/mha/app1.cnf..
    Sat Jun 16 18:09:26 2018 - [info] Reading server configuration from /etc/mha/app1.cnf..
    Sat Jun 16 18:09:26 2018 - [info] Starting SSH connection tests..
    Sat Jun 16 18:09:29 2018 - [debug] 
    Sat Jun 16 18:09:27 2018 - [debug]  Connecting via SSH from root@192.168.214.134(192.168.214.134:22) to root@192.168.214.142(192.168.214.142:22)..
    Sat Jun 16 18:09:28 2018 - [debug]   ok.
    Sat Jun 16 18:09:28 2018 - [debug]  Connecting via SSH from root@192.168.214.134(192.168.214.134:22) to root@192.168.214.130(192.168.214.130:22)..
    Sat Jun 16 18:09:29 2018 - [debug]   ok.
    Sat Jun 16 18:09:29 2018 - [debug] 
    Sat Jun 16 18:09:26 2018 - [debug]  Connecting via SSH from root@192.168.214.142(192.168.214.142:22) to root@192.168.214.134(192.168.214.134:22)..
    Warning: Permanently added '192.168.214.134' (ECDSA) to the list of known hosts.
    Sat Jun 16 18:09:27 2018 - [debug]   ok.
    Sat Jun 16 18:09:27 2018 - [debug]  Connecting via SSH from root@192.168.214.142(192.168.214.142:22) to root@192.168.214.130(192.168.214.130:22)..
    Sat Jun 16 18:09:28 2018 - [debug]   ok.
    Sat Jun 16 18:09:30 2018 - [debug] 
    Sat Jun 16 18:09:27 2018 - [debug]  Connecting via SSH from root@192.168.214.130(192.168.214.130:22) to root@192.168.214.142(192.168.214.142:22)..
    Warning: Permanently added '192.168.214.142' (ECDSA) to the list of known hosts.
    Sat Jun 16 18:09:29 2018 - [debug]   ok.
    Sat Jun 16 18:09:29 2018 - [debug]  Connecting via SSH from root@192.168.214.130(192.168.214.130:22) to root@192.168.214.134(192.168.214.134:22)..
    Warning: Permanently added '192.168.214.134' (ECDSA) to the list of known hosts.
    Sat Jun 16 18:09:29 2018 - [debug]   ok.
    Sat Jun 16 18:09:30 2018 - [info] All SSH connection tests passed successfully.
    

    Check replication (the check failed here; the error is shown below):

    Sat Jun 16 18:11:27 2018 - [error][/usr/share/perl5/vendor_perl/MHA/Server.pm, ln393] 192.168.214.134(192.168.214.134:3306): User copyuser does not exist or does not have REPLICATION SLAVE privilege! Other slaves can not start replication from this host.
    Sat Jun 16 18:11:27 2018 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations.  at /usr/share/perl5/vendor_perl/MHA/ServerManager.pm line 1403.
    Sat Jun 16 18:11:27 2018 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
    Sat Jun 16 18:11:27 2018 - [info] Got exit code 1 (Not master dead).
    

    The cause: the manager configuration file designates 192.168.214.134 as a candidate master after failover, but that node has no replication user account. When replication was set up, the binlog coordinates used were past the statement that created the replication user on the master, so that grant never replicated to the slaves; the same replication account must be created on the slaves manually.

    Add the replication account on the slaves (as sketched below), then test again:
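
    For example, run on the candidate master (and any other slave) the same grant that was originally run on the master:

    MariaDB [(none)]> grant replication slave on *.* to copyuser@'192.168.214.%' identified by '123456';

    MariaDB [(none)]> flush privileges;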

    [root@manger ~]# masterha_check_repl --conf=/etc/mha/app1.cnf 
    Sat Jun 16 18:23:06 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
    Sat Jun 16 18:23:06 2018 - [info] Reading application default configuration from /etc/mha/app1.cnf..
    Sat Jun 16 18:23:06 2018 - [info] Reading server configuration from /etc/mha/app1.cnf..
    Sat Jun 16 18:23:06 2018 - [info] MHA::MasterMonitor version 0.56.
    Sat Jun 16 18:23:07 2018 - [info] GTID failover mode = 0
    Sat Jun 16 18:23:07 2018 - [info] Dead Servers:
    Sat Jun 16 18:23:07 2018 - [info] Alive Servers:
    Sat Jun 16 18:23:07 2018 - [info]   192.168.214.142(192.168.214.142:3306)
    Sat Jun 16 18:23:07 2018 - [info]   192.168.214.134(192.168.214.134:3306)
    Sat Jun 16 18:23:07 2018 - [info]   192.168.214.130(192.168.214.130:3306)
    
    ... ...
    Sat Jun 16 18:23:17 2018 - [info] Checking replication health on 192.168.214.134..
    Sat Jun 16 18:23:17 2018 - [info]  ok.
    Sat Jun 16 18:23:17 2018 - [info] Checking replication health on 192.168.214.130..
    Sat Jun 16 18:23:17 2018 - [info]  ok.
    

    Start MHA Monitoring

    [root@manger ~]# masterha_manager --conf=/etc/mha/app1.cnf 
    Sat Jun 16 18:27:27 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
    Sat Jun 16 18:27:27 2018 - [info] Reading application default configuration from /etc/mha/app1.cnf..
    Sat Jun 16 18:27:27 2018 - [info] Reading server configuration from /etc/mha/app1.cnf..
    
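    Note that masterha_manager runs in the foreground and exits once a failover completes. To keep it running in the background, a common pattern (a sketch, not part of the original setup; the flags are standard masterha_manager options) is:

    [root@manger ~]# nohup masterha_manager --conf=/etc/mha/app1.cnf \
        --remove_dead_master_conf --ignore_last_failover \
        &> /data/mastermha/app1/manager.log &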

    Test Failover
    Shut down the MySQL master and watch the monitor output; the stop command is shown below.
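
    On the master node:

    [root@master ~]# systemctl stop mariadb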

    [root@manger ~]# masterha_manager --conf=/etc/mha/app1.cnf 
    Sat Jun 16 18:27:27 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
    Sat Jun 16 18:27:27 2018 - [info] Reading application default configuration from /etc/mha/app1.cnf..
    Sat Jun 16 18:27:27 2018 - [info] Reading server configuration from /etc/mha/app1.cnf..
      Creating /data/mastermha/app1 if not exists..    ok.
      Checking output directory is accessible or not..
       ok.
      Binlog found at /var/lib/mysql, up to master-bin.000005
    Sat Jun 16 18:34:59 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
    Sat Jun 16 18:34:59 2018 - [info] Reading application default configuration from /etc/mha/app1.cnf..
    Sat Jun 16 18:34:59 2018 - [info] Reading server configuration from /etc/mha/app1.cnf..
    [root@manger ~]# 
    

    Check the latest replication status on the slaves:

    MariaDB [(none)]> show slave status\G
    *************************** 1. row ***************************
                   Slave_IO_State: Waiting for master to send event
                      Master_Host: 192.168.214.134
                      Master_User: copyuser
                      Master_Port: 3306
                    Connect_Retry: 60
                  Master_Log_File: master-bin.000003
              Read_Master_Log_Pos: 632
                   Relay_Log_File: relay-log.000002
                    Relay_Log_Pos: 530
            Relay_Master_Log_File: master-bin.000003
                 Slave_IO_Running: Yes
                Slave_SQL_Running: Yes
    

    As you can see, the slave's replication state is healthy, and its replication master has changed to the newly promoted master. Write some data on the new master to verify:

    New master node:

    MariaDB [(none)]> create database db2;

    Check replication on slave-2:

    MariaDB [(none)]> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | db1                |
    | db2                |
    | mysql              |
    | performance_schema |
    | test               |
    +--------------------+
    6 rows in set (0.01 sec)
    

    At this point, the MHA MySQL high-availability cluster is configured.
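
    You can check the manager's state at any time with masterha_check_status (after a completed failover it will report that the manager is stopped):

    [root@manger ~]# masterha_check_status --conf=/etc/mha/app1.cnf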

    Other Issues

    Once MHA completes a failover, the manager program exits automatically; in other words, MHA steps in only at the critical moment of failover. Will the failed node become the master again when it comes back online? No. A recovered node is left out of the cluster: it does not rejoin automatically, and it can only rejoin as a slave. You must configure the returning node in the slave role, and if its data lags far behind the master, first restore it from a backup before connecting it to the master for replication.

    Bringing the Failed Node Back Online

    Back up the data from the new master node (here slave-1, which was promoted):

    [root@slave-1 opt]# mysqldump -A -F --single-transaction --master-data=2 > ./fullbak.sql    # -A: all databases, -F: flush binlogs first, --master-data=2: record binlog coordinates as a comment
    

    Wipe the data on the node being brought back online:

    # run on the recovering node (the old master)
    systemctl stop mariadb
    rm -rf /var/lib/mysql/*
    systemctl start mariadb

    Copy the backup over:

    [root@slave-1 opt]# scp fullbak.sql 192.168.214.142:/opt/

    Restore the data:

    [root@master ~]# mysql < /opt/fullbak.sql

    Verify the data:

    MariaDB [(none)]> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | db1                |
    | db2                |
    | db3                |
    | db4                |
    | db5                |
    | db6                |
    | mysql              |
    | performance_schema |
    | test               |
    +--------------------+
    

    Configure replication:

    [mysqld]
    log_bin=master-bin    # keep the binlog, since this node may become master again
    server_id= 1
    innodb_file_per_table=on
    sync_binlog=1
    skip_name_resolve
    relay_log=relay-log
    relay_log_purge=0
    

    [root@master ~]# systemctl restart mariadb

    Connect to the new master for replication
    Check the new master's binlog coordinates:

    MariaDB [(none)]> show master status;
    +-------------------+----------+--------------+------------------+
    | File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +-------------------+----------+--------------+------------------+
    | master-bin.000004 |      245 |              |                  |
    +-------------------+----------+--------------+------------------+
    
    MariaDB [(none)]> change master to master_host='192.168.214.134',master_user='copyuser',master_port=3306,master_password='123456',master_log_file='master-bin.000004',master_log_pos=245;
    
    MariaDB [(none)]> flush privileges;
    
    MariaDB [(none)]> start slave;
    
    MariaDB [(none)]> show slave status\G
    *************************** 1. row ***************************
                   Slave_IO_State: Waiting for master to send event
                      Master_Host: 192.168.214.134
                      Master_User: copyuser
                      Master_Port: 3306
                    Connect_Retry: 60
                  Master_Log_File: master-bin.000004
              Read_Master_Log_Pos: 245
                   Relay_Log_File: relay-log.000002
                    Relay_Log_Pos: 530
            Relay_Master_Log_File: master-bin.000004
                 Slave_IO_Running: Yes
                Slave_SQL_Running: Yes
                  Replicate_Do_DB: 
    

    Check the master's Binlog Dump threads:

    MariaDB [(none)]> show processlist;
    +----+----------+-----------------------+------+-------------+-------+-----------------------------------------------------------------------+------------------+----------+
    | Id | User     | Host                  | db   | Command     | Time  | State                                                                 | Info             | Progress |
    +----+----------+-----------------------+------+-------------+-------+-----------------------------------------------------------------------+------------------+----------+
    |  7 | root     | localhost             | NULL | Sleep       | 53279 |                                                                       | NULL             |    0.000 |
    | 22 | copyuser | 192.168.214.130:60116 | NULL | Binlog Dump | 53166 | Master has sent all binlog to slave; waiting for binlog to be updated | NULL             |    0.000 |
    | 30 | copyuser | 192.168.214.142:50656 | NULL | Binlog Dump |   123 | Master has sent all binlog to slave; waiting for binlog to be updated | NULL             |    0.000 |
    | 31 | root     | localhost             | NULL | Query       |     0 | NULL                                                                  | show processlist |    0.000 |
    +----+----------+-----------------------+------+-------------+-------+-----------------------------------------------------------------------+------------------+----------+
    