• OpenStack controller HA test environment notes (3): configuring HAProxy


    Back up haproxy.cfg before editing it:
    # vi /etc/haproxy/haproxy.cfg

    global
        chroot /var/lib/haproxy
        daemon
        group haproxy
        maxconn 4000
        pidfile /var/run/haproxy.pid
        user haproxy

    defaults
        log global
        maxconn 4000
        option redispatch
        retries 3
        timeout http-request 10s
        timeout queue 1m
        timeout connect 10s
        timeout client 1m
        timeout server 1m
        timeout check 10s

    listen galera_cluster
        bind 10.0.0.10:3306
        balance source
        option httpchk
        server controller2 10.0.0.12:3306 check port 9200 inter 2000 rise 2 fall 5
        server controller3 10.0.0.13:3306 check port 9200 inter 2000 rise 2 fall 5


    haproxy.cfg must be edited on every node.
    This only tests HAProxy itself, load balancing port 3306 across the nodes. 3306 is MariaDB's listening port; MariaDB is not installed yet and will be set up in the next post.
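    For a test setup it also helps to enable HAProxy's built-in stats page, so backend and health-check state is visible in a browser. A minimal sketch (the listener name and port 8088 are illustrative choices, not from the original config):

```
listen stats
    bind 10.0.0.10:8088
    mode http
    stats enable
    stats uri /
```

    Browsing to http://10.0.0.10:8088/ then shows each backend's up/down state and the result of its last health check.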


    Because HAProxy must be able to bind to the VIP even on a node that does not currently hold it, enable the net.ipv4.ip_nonlocal_bind kernel parameter on every node:
    # echo 'net.ipv4.ip_nonlocal_bind = 1'>>/etc/sysctl.conf
    # sysctl -p
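    Note that the plain `echo >>` above appends a duplicate line every time it is rerun. A small sketch of an idempotent variant (CONF stands in for /etc/sysctl.conf so it can run without root; the helper name is illustrative):

```shell
#!/bin/sh
# Append a sysctl setting only if it is not already present, so reruns
# are no-ops. CONF defaults to a scratch file instead of /etc/sysctl.conf.
CONF=${CONF:-/tmp/sysctl-demo.conf}
: > "$CONF"

add_setting() {
    grep -q "^$1" "$CONF" || echo "$1 = $2" >> "$CONF"
}

add_setting net.ipv4.ip_nonlocal_bind 1
add_setting net.ipv4.ip_nonlocal_bind 1   # rerun is a no-op
cat "$CONF"                               # the line appears exactly once
```

    On a real node, point CONF at /etc/sysctl.conf and follow with `sysctl -p` as above.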


    Add an haproxy service resource to the cluster:
    # crm configure primitive haproxy lsb:haproxy op monitor interval="30s"
    “lsb:haproxy” means the haproxy service managed through an LSB init script.

    ERROR: lsb:haproxy: got no meta-data, does this RA exist?
    ERROR: lsb:haproxy: got no meta-data, does this RA exist?
    ERROR: lsb:haproxy: no such resource agent
    Do you still want to commit (y/n)? n

    It seems crm does not recognize an “haproxy” service. List the services crm can currently see:
    # crm ra list lsb
    netconsole  network

    netconsole and network live in /etc/rc.d/init.d and are the only init scripts a default CentOS 7 install ships; presumably creating an haproxy init script in that directory is enough (do this on every node):
    # vi /etc/rc.d/init.d/haproxy

    With the following content:
    #!/bin/bash
    # Minimal LSB-style wrapper so Pacemaker's lsb:haproxy resource agent
    # can drive the haproxy systemd unit.

    case "$1" in
      start)
            systemctl start haproxy.service
            ;;
      stop)
            systemctl stop haproxy.service
            ;;
      status)
            systemctl status -l haproxy.service
            ;;
      restart)
            systemctl restart haproxy.service
            ;;
      *)
            echo "Usage: $0 {start|stop|status|restart}"
            exit 2
            ;;
    esac


    Remember to make it executable:
    # chmod 755 /etc/rc.d/init.d/haproxy
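    The wrapper's dispatch logic can be exercised without a running haproxy unit by stubbing systemctl; a sketch (haproxy_ctl and the stub are demo names, mirroring the case statement in the script above):

```shell
#!/bin/sh
# Stub systemctl for the demo only, so nothing is actually started.
systemctl() { echo "systemctl $*"; }

# Mirror of the dispatch logic in /etc/rc.d/init.d/haproxy.
haproxy_ctl() {
    case "$1" in
      start|stop|restart) systemctl "$1" haproxy.service ;;
      status)             systemctl status -l haproxy.service ;;
      *)  echo "Usage: haproxy_ctl {start|stop|status|restart}" >&2
          return 2 ;;
    esac
}

haproxy_ctl start     # -> systemctl start haproxy.service
haproxy_ctl bogus 2>/dev/null || echo "unknown action rejected"
```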


    Confirm again that crm recognizes “haproxy”:
    # crm ra list lsb
    haproxy     netconsole  network
    haproxy is now listed, and “service haproxy status” works as well; retry creating the haproxy resource with the same crm configure primitive command.


    Check resource status:
    # crm_mon
    Last updated: Tue Dec  8 11:28:35 2015
    Last change: Tue Dec  8 11:28:28 2015
    Stack: corosync
    Current DC: controller2 (167772172) - partition with quorum
    Version: 1.1.12-a14efad
    2 Nodes configured
    2 Resources configured


    Online: [ controller2 controller3 ]

    myvip   (ocf::heartbeat:IPaddr2):       Started controller2
    haproxy (lsb:haproxy):  Started controller3

    The haproxy resource currently runs on controller3; checking the haproxy service there shows it is active:
    # systemctl status -l haproxy.service


    Require HAProxy and the VIP to run on the same node:
    # crm configure colocation haproxy-with-vip INFINITY: haproxy myvip

    Require the VIP to be taken over before HAProxy starts:
    # crm configure order haproxy-after-vip mandatory: myvip haproxy
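    The same effect can also be expressed with a resource group (a common alternative, not used in this setup): a group implies both the colocation and the start order, VIP first. A sketch (the group name g_vip_haproxy is illustrative):

```
# crm configure group g_vip_haproxy myvip haproxy
```

    Use either the group or the two explicit constraints above, not both.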

    --------------------------------------------------------------------------------------------
    Deploying the HAProxy instances on the OpenStack controller nodes has become common practice. An odd number of instances (3, 5, …) is preferred.

    The complete haproxy.cfg example from the official documentation follows. Note that in its galera_cluster section controller2 and controller3 are marked backup, so writes go to a single Galera node at a time, a common way to avoid write conflicts:
    global
        chroot /var/lib/haproxy
        daemon
        group haproxy
        maxconn 4000
        pidfile /var/run/haproxy.pid
        user haproxy

    defaults
        log global
        maxconn 4000
        option redispatch
        retries 3
        timeout http-request 10s
        timeout queue 1m
        timeout connect 10s
        timeout client 1m
        timeout server 1m
        timeout check 10s

    listen dashboard_cluster
      bind <Virtual IP>:443
      balance  source
      option  tcpka
      option  httpchk
      option  tcplog
      server controller1 10.0.0.1:443 check inter 2000 rise 2 fall 5
      server controller2 10.0.0.2:443 check inter 2000 rise 2 fall 5
      server controller3 10.0.0.3:443 check inter 2000 rise 2 fall 5

    listen galera_cluster
        bind <Virtual IP>:3306
        balance source
        option httpchk
        server controller1 10.0.0.4:3306 check port 9200 inter 2000 rise 2 fall 5
        server controller2 10.0.0.5:3306 backup check port 9200 inter 2000 rise 2 fall 5
        server controller3 10.0.0.6:3306 backup check port 9200 inter 2000 rise 2 fall 5

    listen glance_api_cluster
      bind <Virtual IP>:9292
      balance  source
      option  tcpka
      option  httpchk
      option  tcplog
      server controller1 10.0.0.1:9292 check inter 2000 rise 2 fall 5
      server controller2 10.0.0.2:9292 check inter 2000 rise 2 fall 5
      server controller3 10.0.0.3:9292 check inter 2000 rise 2 fall 5

    listen glance_registry_cluster
      bind <Virtual IP>:9191
      balance  source
      option  tcpka
      option  tcplog
      server controller1 10.0.0.1:9191 check inter 2000 rise 2 fall 5
      server controller2 10.0.0.2:9191 check inter 2000 rise 2 fall 5
      server controller3 10.0.0.3:9191 check inter 2000 rise 2 fall 5

    listen keystone_admin_cluster
      bind <Virtual IP>:35357
      balance  source
      option  tcpka
      option  httpchk
      option  tcplog
      server controller1 10.0.0.1:35357 check inter 2000 rise 2 fall 5
      server controller2 10.0.0.2:35357 check inter 2000 rise 2 fall 5
      server controller3 10.0.0.3:35357 check inter 2000 rise 2 fall 5

    listen keystone_public_internal_cluster
      bind <Virtual IP>:5000
      balance  source
      option  tcpka
      option  httpchk
      option  tcplog
      server controller1 10.0.0.1:5000 check inter 2000 rise 2 fall 5
      server controller2 10.0.0.2:5000 check inter 2000 rise 2 fall 5
      server controller3 10.0.0.3:5000 check inter 2000 rise 2 fall 5

    listen nova_ec2_api_cluster
      bind <Virtual IP>:8773
      balance  source
      option  tcpka
      option  tcplog
      server controller1 10.0.0.1:8773 check inter 2000 rise 2 fall 5
      server controller2 10.0.0.2:8773 check inter 2000 rise 2 fall 5
      server controller3 10.0.0.3:8773 check inter 2000 rise 2 fall 5

    listen nova_compute_api_cluster
      bind <Virtual IP>:8774
      balance  source
      option  tcpka
      option  httpchk
      option  tcplog
      server controller1 10.0.0.1:8774 check inter 2000 rise 2 fall 5
      server controller2 10.0.0.2:8774 check inter 2000 rise 2 fall 5
      server controller3 10.0.0.3:8774 check inter 2000 rise 2 fall 5

    listen nova_metadata_api_cluster
      bind <Virtual IP>:8775
      balance  source
      option  tcpka
      option  tcplog
      server controller1 10.0.0.1:8775 check inter 2000 rise 2 fall 5
      server controller2 10.0.0.2:8775 check inter 2000 rise 2 fall 5
      server controller3 10.0.0.3:8775 check inter 2000 rise 2 fall 5

    listen cinder_api_cluster
      bind <Virtual IP>:8776
      balance  source
      option  tcpka
      option  httpchk
      option  tcplog
      server controller1 10.0.0.1:8776 check inter 2000 rise 2 fall 5
      server controller2 10.0.0.2:8776 check inter 2000 rise 2 fall 5
      server controller3 10.0.0.3:8776 check inter 2000 rise 2 fall 5

    listen ceilometer_api_cluster
      bind <Virtual IP>:8777
      balance  source
      option  tcpka
      option  httpchk
      option  tcplog
      server controller1 10.0.0.1:8777 check inter 2000 rise 2 fall 5
      server controller2 10.0.0.2:8777 check inter 2000 rise 2 fall 5
      server controller3 10.0.0.3:8777 check inter 2000 rise 2 fall 5

    listen spice_cluster
      bind <Virtual IP>:6080
      balance  source
      option  tcpka
      option  tcplog
      server controller1 10.0.0.1:6080 check inter 2000 rise 2 fall 5
      server controller2 10.0.0.2:6080 check inter 2000 rise 2 fall 5
      server controller3 10.0.0.3:6080 check inter 2000 rise 2 fall 5

    listen neutron_api_cluster
      bind <Virtual IP>:9696
      balance  source
      option  tcpka
      option  httpchk
      option  tcplog
      server controller1 10.0.0.1:9696 check inter 2000 rise 2 fall 5
      server controller2 10.0.0.2:9696 check inter 2000 rise 2 fall 5
      server controller3 10.0.0.3:9696 check inter 2000 rise 2 fall 5

    listen swift_proxy_cluster
      bind <Virtual IP>:8080
      balance  source
      option  tcplog
      option  tcpka
      server controller1 10.0.0.1:8080 check inter 2000 rise 2 fall 5
      server controller2 10.0.0.2:8080 check inter 2000 rise 2 fall 5
      server controller3 10.0.0.3:8080 check inter 2000 rise 2 fall 5

• Original post: https://www.cnblogs.com/gzxbkk/p/6856436.html