• Installing MySQL, Redis, and RabbitMQ on CentOS 7


    System version

    CentOS Linux release 7.2.1511 (Core)

    Installing MySQL

    1. Download the MySQL yum repo package

    wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
    

    2. Install the mysql-community-release-el7-5.noarch.rpm package

    rpm -ivh mysql-community-release-el7-5.noarch.rpm
    

    Installing this package provides two MySQL yum repo files:

    /etc/yum.repos.d/mysql-community.repo
    /etc/yum.repos.d/mysql-community-source.repo
    

    3. Install MySQL

    sudo yum install mysql-server
    

    4. Start MySQL

    systemctl start mysqld.service
    # enable start on boot
    systemctl enable mysqld.service
    

    5. Change the default MySQL password

    The default root password is empty. Set it to 123456 (pick a stronger password for production):

    mysqladmin -uroot password '123456'
    

    Installing Redis

    1. Download, extract and compile Redis with:

    wget http://download.redis.io/releases/redis-3.2.4.tar.gz
    tar xzf redis-3.2.4.tar.gz
    cd redis-3.2.4
    make
    

    2. Start Redis:

    nohup src/redis-server &
    

    3. Create symlinks:

    # use absolute paths, otherwise the links break when resolved from other directories
    ln -s "$(pwd)/src/redis-cli" /usr/bin/redis-cli
    ln -s "$(pwd)/src/redis-server" /usr/bin/redis-server
    

    4. Verify:

    src/redis-cli
    redis> set foo bar
    OK
    redis> get foo
    "bar"
    

    Installing RabbitMQ

    Run-as user: rabbitmq

    YUM configuration

    wget https://mirrors.tuna.tsinghua.edu.cn/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
    rpm -Uvh epel-release-7-8.noarch.rpm
    

    Install the packages

    yum -y install erlang librabbitmq* rabbitmq-server.noarch
    

    Adjust the ulimit settings

    printf '%s\n' \
        '* soft nofile 655350' \
        '* hard nofile 655350' \
        '* soft stack 10240' \
        '* hard stack 10240' \
        '* soft nproc 655350' \
        '* hard nproc 655350' >> /etc/security/limits.conf
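The new limits only apply to sessions started after the change (PAM reads limits.conf at login), so verify from a fresh shell. A minimal sanity check; the exact number you see depends on your system:

```shell
#!/bin/sh
# Show the current soft limit on open file descriptors for this session.
# Re-login after editing limits.conf for the new values to take effect.
nofile=$(ulimit -n)
echo "open files soft limit: $nofile"
# Sanity check: the value should be a number (or "unlimited").
case "$nofile" in
    unlimited) echo "limit check: ok" ;;
    ''|*[!0-9]*) echo "limit check: unexpected value" ;;
    *) echo "limit check: ok" ;;
esac
```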
    

    Edit the configuration

    [root@A08-R16-I255-82 rabbitmq]# cat /etc/rabbitmq/rabbitmq.config
    [
    {rabbit, [{cluster_nodes, {['rabbit@A08-R16-I255-82.SQ01.SQGOV.LOCAL', 'rabbit@A08-R16-I255-83.SQ01.SQGOV.LOCAL', 'rabbit@A08-R16-I255-84.SQ01.SQGOV.LOCAL'], disc}},
    {cluster_partition_handling, pause_minority}]},
    {kernel, [{inet_dist_listen_min, 9100},{inet_dist_listen_max, 9200}]},
    {rabbitmq_management, [{listener, [{port, 15672}, {ip, "10.239.253.82"}]}]}
    ]
    [root@A08-R16-I255-82 ~]# cat /etc/rabbitmq/rabbitmq-env.conf 
    ERL_EPMD_ADDRESS=10.239.253.82
    ERL_EPMD_PORT=4369
    HOME=/export/rabbitmq
    SNAME=rabbit
    RABBITMQ_NODE_IP_ADDRESS=10.239.253.82
    RABBITMQ_NODE_PORT=5672
    HOSTNAME=A08-R16-I255-82.SQ01.SQGOV.LOCAL
    RABBITMQ_NODENAME=rabbit@A08-R16-I255-82
    RABBITMQ_CONFIG_FILE=/etc/rabbitmq/rabbitmq.config
    RABBITMQ_MNESIA_BASE=/export/rabbitmq
    RABBITMQ_MNESIA_DIR=/export/rabbitmq/mnesia
    RABBITMQ_PLUGINS_DIR=/export/rabbitmq/plugins
    RABBITMQ_LOG_BASE=/export/log/rabbitmq
    RABBITMQ_USE_LONGNAME=true
    On the other two servers, use the same configuration with the IP address and hostname changed accordingly.
    

    Create the required directories

    mkdir -p /export/rabbitmq/{mnesia,plugins} && chown -R rabbitmq:rabbitmq /export/rabbitmq
    mkdir -p /export/log/rabbitmq && chown -R rabbitmq:rabbitmq /export/log/rabbitmq
    

    Start the rabbitmq-server service

    systemctl start rabbitmq-server.service 
    

    Enable the RabbitMQ web management plugin

    rabbitmq-plugins enable rabbitmq_management
    

    RabbitMQ cluster configuration

    PreDeploy

    1. Make sure erlang and RabbitMQ are installed on every node.
    2. Edit the hosts file:
    192.168.1.1 rabbit1
    192.168.1.2 rabbit2
    192.168.1.3 rabbit3
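The hosts entries need to be added on every node, and adding them idempotently avoids duplicates on repeated runs. A sketch of that loop, exercised here against a temporary file instead of /etc/hosts so it can be tried safely:

```shell
#!/bin/sh
# Append each cluster entry to the hosts file only if it is not already there.
HOSTS_FILE=$(mktemp)            # stand-in for /etc/hosts in this sketch
for entry in "192.168.1.1 rabbit1" "192.168.1.2 rabbit2" "192.168.1.3 rabbit3"; do
    grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
# Running the loop a second time must not create duplicates.
for entry in "192.168.1.1 rabbit1" "192.168.1.2 rabbit2" "192.168.1.3 rabbit3"; do
    grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
wc -l < "$HOSTS_FILE"   # still 3 lines
```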
    

    RabbitMQ clustering relies on Erlang clustering, so the Erlang cluster must be working first.

    Nodes in an Erlang cluster authenticate to one another with a magic cookie, stored in /var/lib/rabbitmq/.erlang.cookie with file permissions 400.

    The cookie must therefore be identical on every node; otherwise the nodes cannot communicate.

    cat /var/lib/rabbitmq/.erlang.cookie
    chown -R rabbitmq.rabbitmq /var/lib/rabbitmq/.erlang.cookie
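A quick way to confirm the cookies agree is to compare the files byte for byte (in practice you would fetch each node's /var/lib/rabbitmq/.erlang.cookie, e.g. with scp). A local sketch of the comparison logic, using two temporary files with sample cookie values:

```shell
#!/bin/sh
# Compare cookie files from two nodes; mismatched cookies prevent clustering.
a=$(mktemp); b=$(mktemp)
printf 'ABCDEFGHIJKLMNOP' > "$a"   # sample cookie fetched from node1
printf 'ABCDEFGHIJKLMNOP' > "$b"   # sample cookie fetched from node2
if cmp -s "$a" "$b"; then
    result=match
else
    result=differ
fi
echo "cookies $result"
rm -f "$a" "$b"
```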
    

    Stop the RabbitMQ service on every node, then start each one again with the -detached flag:

    # run on every node
    rabbitmqctl stop
    rabbitmq-server -detached
    

    Form the cluster: join node2 and node3 to node1

    rabbit2 # rabbitmqctl stop_app
    rabbit2 # rabbitmqctl join_cluster rabbit@rabbit1         # if the join fails here, replace rabbit1 with the node's actual default hostname
    rabbit2 # rabbitmqctl start_app
    rabbit3 # rabbitmqctl stop_app
    rabbit3 # rabbitmqctl join_cluster rabbit@rabbit1
    rabbit3 # rabbitmqctl start_app
    

    At this point node2 and node3 also connect to each other automatically.

    # check the cluster status on each node:
    rabbitmqctl cluster_status
    

    Set the mirrored-queue policy. Run this on any one node; if it is not set, queues are not mirrored:

    rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
    

    This mirrors every queue to all nodes, so each node holds an identical copy of the queue state.
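The first argument after the policy name is a regular expression matched against queue names; "^" matches every name. A small sketch of how such patterns select queues, using grep over some hypothetical queue names (the names and the "^ha\." pattern are illustrations, not from this setup):

```shell
#!/bin/sh
# Hypothetical queue names to illustrate policy pattern matching.
queues='orders
ha.payments
ha.audit
logs'
# "^" matches every queue name, i.e. mirror everything.
all=$(printf '%s\n' "$queues" | grep -cE '^')
# A narrower pattern such as "^ha\." would mirror only queues
# whose names start with "ha."
ha=$(printf '%s\n' "$queues" | grep -cE '^ha\.')
echo "all=$all ha=$ha"   # all=4 ha=2
```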

    At this point the RabbitMQ high-availability cluster is complete.

    Account management

    Add a regular account (publish and consume permissions only)

    rabbitmqctl add_user username userpassword
    

    Set the role

    rabbitmqctl set_user_tags username administrator
    

    Set permissions

    rabbitmqctl set_permissions -p '/' username ".*" ".*" ".*"
    

    Accounts can be added on any node; they are synchronized automatically to the other nodes in the cluster.
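The three account commands are often bundled into one provisioning script. A sketch; the account name, password, vhost, and the management tag are placeholders, and the DRY_RUN switch records the commands instead of executing them, so the script can be exercised without a broker present:

```shell
#!/bin/sh
# Provision a RabbitMQ account and grant it permissions on a vhost.
# With DRY_RUN=1 the rabbitmqctl calls are printed, not executed.
USER=appuser          # placeholder account name
PASS=change-me        # placeholder password
VHOST=/               # target virtual host
DRY_RUN=1             # set to 0 on a real node
RAN=""

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
        RAN="$RAN[$*]"    # record for inspection
    else
        "$@"
    fi
}

run rabbitmqctl add_user "$USER" "$PASS"
run rabbitmqctl set_user_tags "$USER" management
run rabbitmqctl set_permissions -p "$VHOST" "$USER" ".*" ".*" ".*"
```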

    Enable start on boot

    systemctl enable rabbitmq-server.service
    

    Node types (ram | disc)

    Creating a RAM node

    rabbit2# rabbitmqctl stop_app
    Stopping node rabbit@rabbit2 ...done.
    rabbit2# rabbitmqctl join_cluster --ram rabbit@rabbit1
    Clustering node rabbit@rabbit2 with [rabbit@rabbit1] ...done.
    rabbit2# rabbitmqctl start_app
    Starting node rabbit@rabbit2 ...done.
    

    Changing a node's type

    rabbit2# rabbitmqctl stop_app
    Stopping node rabbit@rabbit2 ...done.
    rabbit2# rabbitmqctl change_cluster_node_type disc
    Turning rabbit@rabbit2 into a disc node ...
    ...done.
    rabbit2# rabbitmqctl start_app
    Starting node rabbit@rabbit2 ...done.
    rabbit1# rabbitmqctl stop_app
    Stopping node rabbit@rabbit1 ...done.
    rabbit1# rabbitmqctl change_cluster_node_type ram
    Turning rabbit@rabbit1 into a ram node ...
    rabbit1# rabbitmqctl start_app
    Starting node rabbit@rabbit1 ...done.
    

    Install the HAProxy load balancer

    yum install haproxy
    

    Edit the configuration file:

    /etc/haproxy/haproxy.cfg

    defaults
        mode                    http
        log                     global
        option                  httplog
        option                  dontlognull
        option http-server-close
        option forwardfor       except 127.0.0.0/8
        option                  redispatch
        retries                 3
        timeout http-request    10s
        timeout queue           1m
        timeout connect         10s
        timeout client          1m
        timeout server          1m
        timeout http-keep-alive 10s
        timeout check           10s
        maxconn                 3000
      
    listen rabbitmq_cluster 0.0.0.0:5672
        mode tcp
        balance roundrobin
        server rqslave1 172.16.3.107:5672 check inter 2000 rise 2 fall 3
        server rqslave2 172.16.3.108:5672 check inter 2000 rise 2 fall 3
        server rqmaster 172.16.3.32:5672 check inter 2000 rise 2 fall 3
    

    The load balancer listens on port 5672 and round-robins between the two RAM nodes, 172.16.3.107 and 172.16.3.108, on port 5672. 172.16.3.32 is the disc node; it serves only as a backup and is not exposed to producers and consumers. If server resources allow, configure several disc nodes, so that the loss of one disc node does not affect the cluster; only losing all of them at once would.
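Roundrobin balancing simply cycles through the back-end list in order. A sketch of the selection order over the two RAM nodes (an illustration only; real HAProxy also accounts for health checks and weights):

```shell
#!/bin/sh
# Simulate roundrobin selection over the two RAM-node back ends.
set -- 172.16.3.107:5672 172.16.3.108:5672
i=0
for request in 1 2 3 4; do
    # pick back end i mod 2, then advance the counter
    if [ $((i % 2)) -eq 0 ]; then backend=$1; else backend=$2; fi
    echo "request $request -> $backend"
    i=$((i + 1))
done
```

Requests alternate between the two servers, which is why both RAM nodes should stay behind the same HAProxy address rather than being addressed directly.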

    Restarting cluster nodes

    Nodes that have been joined to a cluster can be stopped at any time. It is also ok for them to crash. In both cases the rest of the cluster continues operating unaffected, and the nodes automatically "catch up" with the other cluster nodes when they start up again.

    We shut down the nodes rabbit@rabbit1 and rabbit@rabbit3 and check on the cluster status at each step:

    rabbit1# rabbitmqctl stop
    Stopping and halting node rabbit@rabbit1 ...done.
    rabbit2# rabbitmqctl cluster_status
    Cluster status of node rabbit@rabbit2 ...
    [{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},
     {running_nodes,[rabbit@rabbit3,rabbit@rabbit2]}]
    ...done.
    rabbit3# rabbitmqctl cluster_status
    Cluster status of node rabbit@rabbit3 ...
    [{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},
     {running_nodes,[rabbit@rabbit2,rabbit@rabbit3]}]
    ...done.
    rabbit3# rabbitmqctl stop
    Stopping and halting node rabbit@rabbit3 ...done.
    rabbit2# rabbitmqctl cluster_status
    Cluster status of node rabbit@rabbit2 ...
    [{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},
     {running_nodes,[rabbit@rabbit2]}]
    ...done.

    Now we start the nodes again, checking on the cluster status as we go along:

    rabbit1# rabbitmq-server -detached
    rabbit1# rabbitmqctl cluster_status
    Cluster status of node rabbit@rabbit1 ...
    [{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},
     {running_nodes,[rabbit@rabbit2,rabbit@rabbit1]}]
    ...done.
    rabbit2# rabbitmqctl cluster_status
    Cluster status of node rabbit@rabbit2 ...
    [{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},
     {running_nodes,[rabbit@rabbit1,rabbit@rabbit2]}]
    ...done.
    rabbit3# rabbitmq-server -detached
    rabbit1# rabbitmqctl cluster_status
    Cluster status of node rabbit@rabbit1 ...
    [{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},
     {running_nodes,[rabbit@rabbit2,rabbit@rabbit1,rabbit@rabbit3]}]
    ...done.
    rabbit2# rabbitmqctl cluster_status
    Cluster status of node rabbit@rabbit2 ...
    [{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},
     {running_nodes,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]
    ...done.
    rabbit3# rabbitmqctl cluster_status
    Cluster status of node rabbit@rabbit3 ...
    [{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},
     {running_nodes,[rabbit@rabbit2,rabbit@rabbit1,rabbit@rabbit3]}]
    ...done.
    

    There are some important caveats:

    When the entire cluster is brought down, the last node to go down must be the first node to be brought online. If this doesn't happen, the nodes will wait 30 seconds for the last disc node to come back online, and fail afterwards. If the last node to go offline cannot be brought back up, it can be removed from the cluster using the forget_cluster_node command - consult the rabbitmqctl manpage for more information.

    If all cluster nodes stop in a simultaneous and uncontrolled manner (for example with a power cut) you can be left with a situation in which all nodes think that some other node stopped after them. In this case you can use the force_boot command on one node to make it bootable again - consult the rabbitmqctl manpage for more information.

    Breaking up a cluster

    Nodes need to be removed explicitly from a cluster when they are no longer meant to be part of it. We first remove rabbit@rabbit3 from the cluster, returning it to independent operation. To do that, on rabbit@rabbit3 we stop the RabbitMQ application, reset the node, and restart the RabbitMQ application.

    rabbit3# rabbitmqctl stop_app
    Stopping node rabbit@rabbit3 ...done.
    rabbit3# rabbitmqctl reset
    Resetting node rabbit@rabbit3 ...done.
    rabbit3# rabbitmqctl start_app
    Starting node rabbit@rabbit3 ...done.

    Note that it would have been equally valid to list rabbit@rabbit3 as a node.

    Running the cluster_status command on the nodes confirms that rabbit@rabbit3 is no longer part of the cluster and operates independently:

    rabbit1# rabbitmqctl cluster_status
    Cluster status of node rabbit@rabbit1 ...
    [{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2]}]},
     {running_nodes,[rabbit@rabbit2,rabbit@rabbit1]}]
    ...done.
    rabbit2# rabbitmqctl cluster_status
    Cluster status of node rabbit@rabbit2 ...
    [{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2]}]},
     {running_nodes,[rabbit@rabbit1,rabbit@rabbit2]}]
    ...done.
    rabbit3# rabbitmqctl cluster_status
    Cluster status of node rabbit@rabbit3 ...
    [{nodes,[{disc,[rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit3]}]
    ...done.

    We can also remove nodes remotely. This is useful, for example, when having to deal with an unresponsive node. We can, for example, remove rabbit@rabbit1 from rabbit@rabbit2.

    rabbit1# rabbitmqctl stop_app
    Stopping node rabbit@rabbit1 ...done.
    rabbit2# rabbitmqctl forget_cluster_node rabbit@rabbit1
    Removing node rabbit@rabbit1 from cluster ...
    ...done.

    Note that rabbit1 still thinks it's clustered with rabbit2, and trying to start it will result in an error. We will need to reset it before it can start again.

    rabbit1# rabbitmqctl start_app
    Starting node rabbit@rabbit1 ...
    Error: inconsistent_cluster: Node rabbit@rabbit1 thinks it's clustered with node rabbit@rabbit2, but rabbit@rabbit2 disagrees
    rabbit1# rabbitmqctl reset
    Resetting node rabbit@rabbit1 ...done.
    rabbit1# rabbitmqctl start_app
    Starting node rabbit@rabbit1 ...
    ...done.

    The cluster_status command now shows all three nodes operating as independent RabbitMQ brokers:

    rabbit1# rabbitmqctl cluster_status
    Cluster status of node rabbit@rabbit1 ...
    [{nodes,[{disc,[rabbit@rabbit1]}]},{running_nodes,[rabbit@rabbit1]}]
    ...done.
    rabbit2# rabbitmqctl cluster_status
    Cluster status of node rabbit@rabbit2 ...
    [{nodes,[{disc,[rabbit@rabbit2]}]},{running_nodes,[rabbit@rabbit2]}]
    ...done.
    rabbit3# rabbitmqctl cluster_status
    Cluster status of node rabbit@rabbit3 ...
    [{nodes,[{disc,[rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit3]}]
    ...done.

    Note that rabbit@rabbit2 retains the residual state of the cluster, whereas rabbit@rabbit1 and rabbit@rabbit3 are freshly initialised RabbitMQ brokers. If we want to re-initialise rabbit@rabbit2 we follow the same steps as for the other nodes:

    rabbit2# rabbitmqctl stop_app
    Stopping node rabbit@rabbit2 ...done.
    rabbit2# rabbitmqctl reset
    Resetting node rabbit@rabbit2 ...done.
    rabbit2# rabbitmqctl start_app
    Starting node rabbit@rabbit2 ...done.
    
  • Original article: https://www.cnblogs.com/huyuedong/p/5982010.html