Compiling and installing keepalived with SaltStack:
Create the corresponding directories, and create the sls state files under them:
[root@node1 ~]# mkdir /srv/salt/prod/keepalived
[root@node1 ~]# mkdir /srv/salt/prod/keepalived/files
1. Compile and install keepalived with SaltStack
1.1 Place the downloaded keepalived source tarball in the files directory under the keepalived directory (the files directory holds the source packages, scripts, and other files the states will serve out):
[root@node1 etc]# pwd
/usr/local/src/keepalived-1.3.6/keepalived/etc
[root@node1 etc]# cp keepalived/keepalived.conf /srv/salt/prod/keepalived/files/
[root@node1 etc]# cp init.d/keepalived /srv/salt/prod/keepalived/files/keepalived.init
[root@node1 sysconfig]# pwd
/usr/local/src/keepalived-1.3.6/keepalived/etc/sysconfig
[root@node1 sysconfig]# cp keepalived /srv/salt/prod/keepalived/files/keepalived.sysconfig
Check the contents of the files directory:
[root@node1 keepalived]# ll files/
total 696
-rw-r--r-- 1 root root 702570 Oct 10 22:21 keepalived-1.3.6.tar.gz
-rwxr-xr-x 1 root root   1335 Oct 10 22:17 keepalived.init
-rw-r--r-- 1 root root    667 Oct 10 22:28 keepalived.sysconfig
1.2 With the keepalived source tarball and startup scripts in place, write the state that installs keepalived:
[root@node1 keepalived]# pwd
/srv/salt/prod/keepalived
[root@node1 keepalived]# cat install.sls
include:
  - pkg.pkg-init

keepalived-install:
  file.managed:
    - name: /usr/local/src/keepalived-1.3.6.tar.gz
    - source: salt://keepalived/files/keepalived-1.3.6.tar.gz
    - user: root
    - group: root
    - mode: 755
  cmd.run:
    - name: cd /usr/local/src/ && tar xf keepalived-1.3.6.tar.gz && cd keepalived-1.3.6 && ./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install
    - unless: test -d /usr/local/keepalived
    - require:
      - pkg: pkg-init
      - file: keepalived-install

keepalived-init:
  file.managed:
    - name: /etc/init.d/keepalived
    - source: salt://keepalived/files/keepalived.init
    - user: root
    - group: root
    - mode: 755
  cmd.run:
    - name: chkconfig --add keepalived
    - unless: chkconfig --list | grep keepalived
    - require:
      - file: /etc/init.d/keepalived

/etc/sysconfig/keepalived:
  file.managed:
    - source: salt://keepalived/files/keepalived.sysconfig
    - user: root
    - group: root
    - mode: 644

/etc/keepalived:
  file.directory:
    - user: root
    - group: root
    - mode: 755
To summarize, the state file above: 1. includes the build environment; 2. compiles and installs keepalived; 3. installs the keepalived init script and registers it as a system service; 4. copies the keepalived.sysconfig file; 5. creates the keepalived configuration directory.
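The included pkg.pkg-init state is not shown in this post. A minimal sketch of what it might contain, assuming it lives at /srv/salt/prod/pkg/pkg-init.sls (the package list is an assumption and may need adjusting for your distribution):

```yaml
# /srv/salt/prod/pkg/pkg-init.sls -- hypothetical build-environment state;
# the install.sls above includes pkg.pkg-init but the original does not show it
pkg-init:
  pkg.installed:
    - pkgs:
      - gcc
      - gcc-c++
      - make
      - autoconf
      - openssl-devel    # keepalived's ./configure needs OpenSSL headers
      - libnl-devel
```

The `- pkg: pkg-init` requisite in install.sls refers to this state's ID, so whatever the file is named, the state ID must stay `pkg-init`.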
Run install.sls to install keepalived:
[root@node1 keepalived]# salt 'node1' state.sls keepalived.install saltenv=prod
2. Once keepalived is installed and has its init script, it still needs a configuration file, and then the service must be started. Because different business requirements may call for different keepalived configurations, the configuration is managed in a state separate from the installation, so that later on a minion's keepalived can be deployed with whichever configuration file fits.
[root@node1 cluster]# pwd
/srv/salt/prod/cluster
[root@node1 cluster]# cat haproxy-outside-keepalived.sls    # HA with haproxy and keepalived combined
include:
  - keepalived.install

keepalived-service:
  file.managed:
    - name: /etc/keepalived/keepalived.conf
    - source: salt://cluster/files/haproxy-outside-keepalived.conf
    - user: root
    - group: root
    - mode: 644
    - template: jinja    # render with the Jinja template engine, using variables
    {% if grains['fqdn'] == 'node1' %}    # assign values based on each node's fqdn grain
    - ROUTEID: haproxy_node1
    - STATEID: MASTER
    - PRIORITYID: 150
    {% elif grains['fqdn'] == 'node2' %}
    - ROUTEID: haproxy_node2
    - STATEID: BACKUP
    - PRIORITYID: 100
    {% endif %}
  service.running:
    - name: keepalived
    - enable: True
    - reload: True
    - watch:
      - file: keepalived-service
To summarize this state file: 1. it includes the keepalived install state; 2. it gives each node a different configuration file, using a Jinja template driven by grains; 3. it starts the keepalived service and enables it at boot.
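Tracing the variables through: on node1, the template source haproxy-outside-keepalived.conf (listed in full at the end of this post) would render roughly as the following fragment, a sketch showing only the substituted lines:

```
! Rendered /etc/keepalived/keepalived.conf on node1 (sketch)
router_id haproxy_node1

vrrp_instance VI_1 {
    state MASTER
    priority 150
    ...
}
```

On node2 the same template yields `router_id haproxy_node2`, `state BACKUP`, `priority 100`, so node1 wins the VRRP election while both nodes are healthy.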
Finally, add the keepalived entries to the top.sls file:
[root@node1 base]# cat top.sls
base:
  '*':
    - init.env_init
prod:
  'node1':
    - cluster.haproxy-outside
    - cluster.haproxy-outside-keepalived
The directory tree of the whole keepalived project:
[root@node1 keepalived]# tree
.
├── files
│   ├── keepalived-1.3.6.tar.gz
│   ├── keepalived.init
│   └── keepalived.sysconfig
└── install.sls

1 directory, 4 files
[root@node1 keepalived]# cd ../cluster/
[root@node1 cluster]# tree
.
├── files
│   ├── haproxy-outside.cfg
│   └── haproxy-outside-keepalived.conf
├── haproxy-outside-keepalived.sls
└── haproxy-outside.sls
Since the install on node1 went cleanly, change the targeting in top.sls so node2 is included as well:
[root@node1 base]# cat top.sls
base:
  '*':
    - init.env_init
prod:
  '*':    # there are only two nodes, so '*' covers them both
    - cluster.haproxy-outside
    - cluster.haproxy-outside-keepalived
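One caveat with `'*'`: any minion added to the prod environment later would also pull these cluster states. A sketch of keeping the targeting explicit with Salt's list matcher instead (same effect with only these two nodes):

```yaml
prod:
  'node1,node2':
    - match: list    # treat the target as a literal comma-separated list of minion IDs
    - cluster.haproxy-outside
    - cluster.haproxy-outside-keepalived
```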
Apply the state files with a highstate:
[root@node1 base]# salt '*' state.highstate
Check the state of node2:
[root@node2 ~]# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address        Foreign Address   State    PID/Program name
tcp        0      0 192.168.44.10:80     0.0.0.0:*         LISTEN   16791/haproxy
tcp        0      0 0.0.0.0:22           0.0.0.0:*         LISTEN   1279/sshd
tcp        0      0 0.0.0.0:8090         0.0.0.0:*         LISTEN   16791/haproxy
tcp        0      0 :::8080              :::*              LISTEN   14351/httpd
tcp        0      0 :::22                :::*              LISTEN   1279/sshd
udp        0      0 0.0.0.0:68           0.0.0.0:*                  1106/dhclient
You can see that haproxy is listening, bound to 192.168.44.10, an address that is not node2's own IP (it is the VIP, currently held by node1).
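Normally a bind to an address the host does not currently own fails; it works here only if the kernel is allowed to bind non-local addresses. A sketch of a Salt state enforcing that sysctl (this state is an assumption, not part of the original setup, which presumably handles it elsewhere, e.g. in the haproxy states):

```yaml
# Let haproxy on the BACKUP node bind the VIP before it actually owns it.
# Hypothetical state -- not shown in the original post.
net.ipv4.ip_nonlocal_bind:
  sysctl.present:
    - value: 1
```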
Check the VIP on node1:
[root@node1 files]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:86:2C:63
          inet addr:192.168.44.134  Bcast:192.168.44.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe86:2c63/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:230013 errors:0 dropped:0 overruns:0 frame:0
          TX packets:172530 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:130350592 (124.3 MiB)  TX bytes:19244347 (18.3 MiB)

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:86:2C:63
          inet addr:192.168.44.10  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:145196 errors:0 dropped:0 overruns:0 frame:0
          TX packets:145196 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:12285984 (11.7 MiB)  TX bytes:12285984 (11.7 MiB)
You can see that eth0:0 carries the VIP. Now stop keepalived by hand and check whether the VIP fails over to node2:
[root@node1 files]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [  OK  ]
Check node2 again:
[root@node2 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:34:32:CB
          inet addr:192.168.44.135  Bcast:192.168.44.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe34:32cb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:494815 errors:0 dropped:0 overruns:0 frame:0
          TX packets:357301 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:250265303 (238.6 MiB)  TX bytes:98088504 (93.5 MiB)

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:34:32:CB
          inet addr:192.168.44.10  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2953 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2953 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1272983 (1.2 MiB)  TX bytes:1272983 (1.2 MiB)
The VIP has moved to node2, so the haproxy-plus-keepalived high-availability setup deployed via SaltStack works. Below are the simple haproxy and keepalived configuration files used.
haproxy configuration file:
[root@node1 files]# pwd
/srv/salt/prod/cluster/files
[root@node1 files]# cat haproxy-outside.cfg
#
# This is a sample configuration. It illustrates how to separate static objects
# traffic from dynamic traffic, and how to dynamically regulate the server load.
#
# It listens on 192.168.1.10:80, and directs all requests for Host 'img' or
# URIs starting with /img or /css to a dedicated group of servers. URIs
# starting with /admin/stats deliver the stats page.
#
global
    maxconn     10000
    stats socket /var/run/haproxy.stat mode 600 level admin
    log         127.0.0.1 local0
    uid         200
    gid         200
    chroot      /var/empty
    daemon

defaults
    mode    http
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

# The public 'www' address in the DMZ
frontend webserver
    bind            192.168.44.10:80
    default_backend web
    #bind 192.168.1.10:443 ssl crt /etc/haproxy/haproxy.pem
    mode http

listen base_stats
    bind *:8090
    stats enable
    stats hide-version
    stats uri /haproxy?stats
    stats realm "haproxy statistics"
    stats auth wadeson:redhat

# The static backend for 'Host: img', /img and /css.
backend web
    balance roundrobin
    retries 2
    server  web1 192.168.44.134:8080 check inter 1000
    server  web2 192.168.44.135:8080 check inter 1000
keepalived configuration file:
[root@node1 files]# cat haproxy-outside-keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     json_hc@163.com
   }
   notification_email_from json_hc@163.com
   smtp_server smtp.163.com
   smtp_connect_timeout 30
   router_id {{ ROUTEID }}
}

vrrp_instance VI_1 {
    state {{ STATEID }}
    interface eth0
    virtual_router_id 51
    priority {{ PRIORITYID }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass password
    }
    virtual_ipaddress {
        192.168.44.10/24 dev eth0 label eth0:0
    }
}
Check the load-balancing behavior of the high-availability setup: