## Environment Preparation
- Three CentOS 7.5 servers
- Erlang, the runtime dependency RabbitMQ requires (installed via rpm)
- The RabbitMQ server package (installed via rpm)
### Server IPs

| Server | IP | Hostname |
| --- | --- | --- |
| rabbit-50 | 192.168.86.50 | node0 |
| rabbit-51 | 192.168.86.51 | node1 |
| rabbit-52 | 192.168.86.52 | node2 |
## Single-Node Deployment
1. Check the server's kernel release, download the Erlang and RabbitMQ versions that match it, and upload both files to the server's /opt directory:

```bash
uname -r
```
2. Install the required system utilities:

```bash
yum install socat logrotate -y
```
3. Install the Erlang dependency:

```bash
cd /opt   # go to the upload directory
rpm -ivh erlang-23.3.4.4-1.el7.x86_64.rpm
```
4. Install RabbitMQ:

```bash
rpm -ivh rabbitmq-server-3.8.19-1.el7.noarch.rpm
```
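A quick sanity check, not part of the original steps: you can ask rpm whether both packages registered cleanly.

```bash
# Both the erlang and rabbitmq-server packages should be listed
rpm -qa | grep -E 'erlang|rabbitmq'
```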
5. Enable the service at boot:

```bash
chkconfig rabbitmq-server on
```
6. Enable the rabbitmq_management plugin, which provides the web management UI:

```bash
rabbitmq-plugins enable rabbitmq_management
```
7. Start the service (the broker must be running before accounts can be managed):

```bash
/sbin/service rabbitmq-server start    # start
/sbin/service rabbitmq-server status   # check status
/sbin/service rabbitmq-server stop     # stop (for reference)
```

8. Create an administrator account:

```bash
rabbitmqctl add_user root s3crEt
rabbitmqctl set_user_tags root administrator
rabbitmqctl set_permissions --vhost '/' root '.*' '.*' '.*'
```
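If you want to double-check that the account and its permissions took effect, rabbitmqctl can list both (assuming the broker is running):

```bash
# List users and their tags
rabbitmqctl list_users
# List permissions on the default vhost
rabbitmqctl list_permissions -p /
```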
9. Open the management UI in a browser at http://localhost:15672 and log in with the root account created in step 8.
## Cluster Deployment
1. Set a distinct hostname on each of the three machines:

```bash
vi /etc/hostname
```
2. Configure the hosts file on every node so the nodes can resolve each other by name:

```bash
vi /etc/hosts
```

```
192.168.86.50 node0
192.168.86.51 node1
192.168.86.52 node2
```
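Afterwards it is worth confirming that each node can reach the others by hostname, for example:

```bash
# Run on node0; repeat the equivalent checks on node1 and node2
ping -c 1 node1
ping -c 1 node2
```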
3. Reboot the servers so the new hostnames take effect.
4. On each machine, follow steps 1 through 6 of the single-node deployment.
5. Make sure every server's erlang.cookie file contains the same value. On node0, run:

```bash
scp /var/lib/rabbitmq/.erlang.cookie root@node1:/var/lib/rabbitmq/.erlang.cookie
scp /var/lib/rabbitmq/.erlang.cookie root@node2:/var/lib/rabbitmq/.erlang.cookie
```
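Copying as root can change the file's owner and mode on the target machines, and RabbitMQ will refuse to start unless the cookie is owned by the rabbitmq user and readable only by its owner, so it is worth restoring both:

```bash
# Run on node1 and node2 after copying the cookie
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie
```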
6. Start the RabbitMQ service on all three machines:

```bash
systemctl start rabbitmq-server
```
7. On node1, stop the application, join the cluster, and start it again (join_cluster only works while the application is stopped):

```bash
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@node0
rabbitmqctl start_app
```
8. On node2, do the same; joining via any existing member works, so pointing at node1 is equivalent to pointing at node0:

```bash
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@node1
rabbitmqctl start_app
```
9. Check the cluster status:

```bash
rabbitmqctl cluster_status
```
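With all three nodes joined, RabbitMQ 3.8 lists them under a Running Nodes section; the output looks roughly like this (abbreviated and illustrative, exact formatting varies by version):

```
Cluster status of node rabbit@node0 ...
Basics

Cluster name: rabbit@node0

Running Nodes

rabbit@node0
rabbit@node1
rabbit@node2
```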
10. Create a user and grant permissions (the internal user database is replicated across the cluster, so this only needs to be done on one node):

```bash
rabbitmqctl add_user admin 123                           # create the user
rabbitmqctl set_user_tags admin administrator            # assign the administrator role
rabbitmqctl set_permissions -p '/' admin '.*' '.*' '.*'  # grant configure/write/read permissions
```
11. Open the management UI of any one node to confirm the cluster state.
## High Availability and Load Balancing with HAProxy + Keepalived
### Architecture Diagram
### Installing HAProxy
1. Perform the following steps on top of the cluster deployment above.
2. Download and install HAProxy on node0 and node1:

```bash
yum install haproxy
```
3. Edit the HAProxy configuration on node0 and node1:

```bash
vi /etc/haproxy/haproxy.cfg
```

```
# Load-balance the management UI
listen rabbitmq_admin
    bind :15673
    balance roundrobin
    server node0 node0:15672
    server node1 node1:15672
    server node2 node2:15672

# Load-balance AMQP traffic
listen rabbitmq_cluster
    bind :5673
    # AMQP is a plain TCP protocol, so proxy at the TCP layer
    mode tcp
    # Weighted round-robin load balancing
    balance roundrobin
    # RabbitMQ cluster nodes. A health check runs every 5 seconds; a node is
    # considered up after 2 consecutive successful checks and down after 3
    # consecutive failures. weight sets the node's weight in the rotation.
    server node0 node0:5672 check inter 5000 rise 2 fall 3 weight 1
    server node1 node1:5672 check inter 5000 rise 2 fall 3 weight 1
    server node2 node2:5672 check inter 5000 rise 2 fall 3 weight 1

# Monitoring/stats page
listen monitor
    bind :8100
    mode http
    option httplog
    stats enable
    stats uri /stats
    stats refresh 5s
    stats auth admin:123
```
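Before starting, HAProxy can validate the file in check-only mode, which catches syntax mistakes without touching any running process:

```bash
# Parse the configuration and exit; prints errors if the file is invalid
haproxy -c -f /etc/haproxy/haproxy.cfg
```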
4. Start HAProxy on both nodes and confirm the process is running:

```bash
haproxy -f /etc/haproxy/haproxy.cfg
ps -ef | grep haproxy
```
### Installing Keepalived
1. Download Keepalived and upload it to the /opt directory.
2. Install the build dependencies, then compile and install:

```bash
# Install dependencies
yum -y install libnl libnl-devel
yum install gcc gcc-c++ openssl openssl-devel -y
yum update glib* -y
# Extract the source
cd /opt
tar -zxvf keepalived-2.2.2.tar.gz
# Configure, build, and install
mkdir /opt/keepalived-2.2.2/build
cd /opt/keepalived-2.2.2/build
../configure --prefix=/usr/local/keepalived-2.2.2
make && make install
```
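A quick way to confirm the build succeeded is to ask the freshly installed binary for its version:

```bash
# Should print the Keepalived version banner
/usr/local/keepalived-2.2.2/sbin/keepalived -v
```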
3. Set up the configuration environment:

```bash
# Create the config directory
mkdir /etc/keepalived
# Back up the default config
cp /usr/local/keepalived-2.2.2/etc/keepalived/keepalived.conf /usr/local/keepalived-2.2.2/etc/keepalived/keepalived.conf_bak
# Symlink the config into /etc/keepalived
ln -s /usr/local/keepalived-2.2.2/etc/keepalived/keepalived.conf /etc/keepalived/
```
4. Copy the Keepalived scripts and binary into the system directories:

```bash
# Init script from the source tree
cp /opt/keepalived-2.2.2/keepalived/etc/init.d/keepalived /etc/init.d/
# Sysconfig file and binary from the install tree
cp /usr/local/keepalived-2.2.2/etc/sysconfig/keepalived /etc/sysconfig/
cp /usr/local/keepalived-2.2.2/sbin/keepalived /usr/sbin/
```
5. Enable the service at boot:

```bash
chmod +x /etc/init.d/keepalived
chkconfig --add keepalived
systemctl enable keepalived.service
```
6. Configure Keepalived, paying attention to the values that differ between master and backup:

```bash
vi /etc/keepalived/keepalived.conf
```

Contents:

```
global_defs {
    # Router id; must differ between nodes: rabbit-0 on node0, rabbit-1 on node1
    router_id rabbit-0
}

# Custom health-check script
vrrp_script chk_haproxy {
    # Script location
    script "/etc/keepalived/haproxy_check.sh"
    # Interval between runs, in seconds
    interval 5
    weight 10
}

vrrp_instance VI_1 {
    # Keepalived role: MASTER for the primary, BACKUP for the standby
    # node0 is MASTER, node1 is BACKUP
    state MASTER
    # Network interface to monitor; check with ifconfig
    interface ens33
    # Virtual router id; must be identical on master and backup
    virtual_router_id 1
    # Priority; the master must be set higher than the backup
    # node0 uses 100, node1 uses 80
    priority 100
    # Interval between master/backup advertisements, in seconds
    advert_int 1
    # Authentication type and password
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    # Invoke the health-check script defined above
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        # Virtual IP address; multiple addresses may be listed
        192.168.86.53
    }
}
```
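For reference, the backup node's file is identical except for these three lines:

```
router_id rabbit-1
state BACKUP
priority 80
```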
7. Write the HAProxy health-check script:

```bash
# Create the log directory for the check script
mkdir -p /usr/local/keepalived-2.2.2/log
# Create the check script
vi /etc/keepalived/haproxy_check.sh
```

Contents:

```bash
#!/bin/bash
LOGFILE="/usr/local/keepalived-2.2.2/log/haproxy-check.log"
echo "[$(date)]: check_haproxy status" >> $LOGFILE
# Check whether haproxy is running
HAProxyStatusA=$(ps -C haproxy --no-header | wc -l)
if [ $HAProxyStatusA -eq 0 ]; then
    echo "[$(date)]: starting haproxy..." >> $LOGFILE
    # Not running, so start it
    /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg >> $LOGFILE 2>&1
fi
# Give haproxy 5 seconds to come up fully
sleep 5
# If haproxy still is not running, stop keepalived on this machine
# so the VIP fails over to the other haproxy node
HAProxyStatusB=$(ps -C haproxy --no-header | wc -l)
if [ $HAProxyStatusB -eq 0 ]; then
    echo "[$(date)]: haproxy failed to start within 5 seconds; stopping keepalived so the VIP fails over to the other haproxy node" >> $LOGFILE
    systemctl stop keepalived
fi
```
8. Make the script executable:

```bash
chmod +x /etc/keepalived/haproxy_check.sh
```
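Running the script once by hand is a cheap way to confirm it can start HAProxy and write its log before handing it over to Keepalived:

```bash
/etc/keepalived/haproxy_check.sh
cat /usr/local/keepalived-2.2.2/log/haproxy-check.log
```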
9. Start the service:

```bash
systemctl start keepalived
```
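On the master, the virtual IP should now be bound to the monitored interface (ens33 here):

```bash
# The VIP 192.168.86.53 should appear as a secondary address on the master
ip addr show ens33
```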
10. Test the failover:

a. Visit http://192.168.86.53:15673/
b. Take the master down (the kill command works) and observe the backup's state.
c. Visit http://192.168.86.53:15673/ again.
d. Restart keepalived on the master; the VIP floats back to the master.
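Clients should always go through the VIP rather than an individual node: AMQP clients connect to 192.168.86.53:5673, and the management UI is served at 192.168.86.53:15673, per the HAProxy frontends above. As a quick end-to-end check, the management HTTP API can be queried through the VIP with the admin account created during cluster setup:

```bash
# Query the cluster overview through the VIP and the HAProxy frontend port
curl -u admin:123 http://192.168.86.53:15673/api/overview
```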
## Common Issues
- How do I remove a node from the cluster?

```bash
# Run on node1 (the node that is leaving); this resets it to a blank standalone broker
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app
rabbitmqctl cluster_status

# Alternatively, if node1 is down or its app is stopped, remove it remotely from node0
rabbitmqctl forget_cluster_node rabbit@node1
```
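Either way, the membership change can be confirmed from any remaining node:

```bash
# node1 should no longer appear among the cluster's nodes
rabbitmqctl cluster_status
```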