    Deploying an Elasticsearch Cluster with Docker

    Environment:
    CentOS 7.2
    docker-engine-1.11.2
    elasticsearch-2.3.3


    Preface:
    For deployment on virtual machine nodes, see the earlier post on an Elasticsearch load-balanced cluster; here we briefly cover deployment with Docker.

    This walkthrough builds a cluster with different node roles (client x1, master x3, data x2):

    ela-client.example.com:192.168.8.10(client node)
    ela-master1.example.com:192.168.8.101(master node)
    ela-master2.example.com:192.168.8.102(master node)
    ela-master3.example.com:192.168.8.103(master node)
    ela-data1.example.com:192.168.8.201(data node)
    ela-data2.example.com:192.168.8.202(data node)


    1. Install docker-engine (all nodes)
    For image pull acceleration and a private registry, see the earlier post "Docker Hub加速及私有镜像搭建" (Docker Hub mirrors and private registry setup).


    2. Pull the elasticsearch Docker image
    docker pull elasticsearch:2.3.3

    Test-run elasticsearch
    docker run -d -p 9200:9200 -p 9300:9300 --name=elasticsearch-test elasticsearch:2.3.3
    Inspect the container's configuration
    docker inspect elasticsearch-test
    Remove the test container (note: the command below force-removes every container on the host; use docker rm -f elasticsearch-test to remove only this one)
    docker rm -f $(docker ps -aq)
    Tip: directories or files that keep growing, such as data and logs, can be mapped onto the Docker host, which makes them easier to manage and customize.
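    As a minimal sketch of that tip (the host paths are simply the ones used later in this post, and the container is a throwaway test instance):
    mkdir -p /opt/elasticsearch/{data,logs}
    docker run -d -p 9200:9200 -p 9300:9300 \
        -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
        -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
        --name=elasticsearch-test elasticsearch:2.3.3
    ls /opt/elasticsearch/data /opt/elasticsearch/logs    # files written by the container now live on the host
    docker rm -f elasticsearch-test                       # clean up the test container again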


    3. Configure and start the nodes
    Create a cluster named elasticsearch_cluster (the default cluster name is elasticsearch).
    A. Client node
    ela-client.example.com:192.168.8.10(client node)

    docker run -d --restart=always -p 9200:9200 -p 9300:9300 --name=elasticsearch-client --oom-kill-disable=true --memory-swappiness=1 -v /opt/elasticsearch/data:/usr/share/elasticsearch/data -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs elasticsearch:2.3.3

    cat >elasticsearch.yml <<HERE
    cluster.name: elasticsearch_cluster
    node.name: ${HOSTNAME}
    node.master: false
    node.data: false
    path.data: /usr/share/elasticsearch/data
    path.logs: /usr/share/elasticsearch/logs
    bootstrap.mlockall: true
    network.host: 0.0.0.0
    network.publish_host: 192.168.8.10
    transport.tcp.port: 9300
    http.port: 9200
    index.refresh_interval: 5s
    script.inline: true
    script.indexed: true
    HERE
    docker cp elasticsearch.yml elasticsearch-client:/usr/share/elasticsearch/config/elasticsearch.yml

    docker restart elasticsearch-client

    Here we simply cp the prepared config file into the corresponding path inside the container and then restart the container.

    man docker-run

           --net="bridge"
              Set the Network mode for the container
                  'bridge': create a network stack on the default Docker bridge
                  'none': no networking
                  'container:': reuse another container's network stack
                  'host': use the Docker host network stack. Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure.

    Note: Docker's default network mode is bridge, which automatically assigns the container a private address. For cluster communication across multiple Docker hosts you need a service-discovery / auto-registration component such as Consul, Etcd or Doozer to coordinate. See the earlier post on a Docker cluster with Swarm + Consul + Shipyard.
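    For instance, you can see the private bridge address a container was given (and which elasticsearch would otherwise advertise to its peers) with docker inspect; the container name below is the one used later in this post:
    docker inspect -f '{{.NetworkSettings.IPAddress}}' elasticsearch-client
    # typically prints something like 172.17.0.2 -- a bridge address that other hosts cannot reach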


    You must set network.publish_host: 192.168.8.10 so that the elasticsearch node advertises an externally reachable address. This is critical: without it the cluster nodes cannot communicate with each other and you will see errors like the following

    [2016-06-21 05:50:19,123][INFO ][discovery.zen ] [consul-s2.example.com] failed to send join request to master [{consul-s1.example.com}{DeKixlVMS2yoynzX8Y-gdA}{172.17.0.1}{172.17.0.1:9300}{data=false, master=true}], reason [RemoteTransportException[[consul-s2.example.com][172.17.0.1:9300]

    The simplest alternative is to set the network mode to host with --net=host, i.e. reuse the host's network stack directly so that no extra network layer is added.

    You can also disable the OOM killer for the container and, depending on the host's memory, use the -m flag (default 0, i.e. unlimited) to cap the container's memory usage.
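    A rough sketch of those two options combined (not used in the rest of this post; the 2g cap is an arbitrary example value, and with --net=host the -p port mappings become unnecessary):
    docker run -d --restart=always --net=host \
        -m 2g --oom-kill-disable=true --memory-swappiness=1 \
        -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
        -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
        --name=elasticsearch-client elasticsearch:2.3.3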

    [root@ela-client ~]# docker ps
    CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS               NAMES
    762e4d21aaf8        elasticsearch:2.3.3   "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes                            elasticsearch-client
    [root@ela-client ~]# netstat -tunlp|grep java
    tcp        0      0 0.0.0.0:9200            0.0.0.0:*               LISTEN      18952/java
    tcp        0      0 0.0.0.0:9300            0.0.0.0:*               LISTEN      18952/java
    [root@ela-client ~]# ls /opt/elasticsearch/
    data  logs
    [root@ela-client ~]# docker logs $(docker ps -q)
    [2016-06-13 16:09:51,308][INFO ][node                     ] [Sunfire] version[2.3.3], pid[1], build[218bdf1/2016-05-17T15:40:04Z]
    [2016-06-13 16:09:51,311][INFO ][node                     ] [Sunfire] initializing ...
    ... ...
    [2016-06-13 16:09:56,408][INFO ][node                     ] [Sunfire] started
    [2016-06-13 16:09:56,417][INFO ][gateway                  ] [Sunfire] recovered [0] indices into cluster_state


    Alternatively, mount the config file as a volume; personally I find volume mapping easier to manage and back up.
    mkdir -p /opt/elasticsearch/config

    cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
    cluster.name: elasticsearch_cluster
    node.name: ${HOSTNAME}
    node.master: false
    node.data: false
    path.data: /usr/share/elasticsearch/data
    path.logs: /usr/share/elasticsearch/logs
    bootstrap.mlockall: true
    network.host: 0.0.0.0
    network.publish_host: 192.168.8.10
    transport.tcp.port: 9300
    http.port: 9200
    index.refresh_interval: 5s
    script.inline: true
    script.indexed: true
    HERE
    docker run -tid --restart=always \
        -p 9200:9200 \
        -p 9300:9300 \
        --oom-kill-disable=true \
        --memory-swappiness=1 \
        -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
        -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
        -v /opt/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
        --name=elasticsearch-client \
        elasticsearch:2.3.3

    B. Master nodes
    ela-master1.example.com:192.168.8.101(master node)
    mkdir -p /opt/elasticsearch/config

    cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
    cluster.name: elasticsearch_cluster
    node.name: ${HOSTNAME}
    node.master: true
    node.data: false
    path.data: /usr/share/elasticsearch/data
    path.logs: /usr/share/elasticsearch/logs
    bootstrap.mlockall: true
    network.host: 0.0.0.0
    network.publish_host: 192.168.8.101
    transport.tcp.port: 9300
    http.port: 9200
    index.refresh_interval: 5s
    script.inline: true
    script.indexed: true
    HERE
    docker run -tid --restart=always \
        -p 9200:9200 \
        -p 9300:9300 \
        --oom-kill-disable=true \
        --memory-swappiness=1 \
        -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
        -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
        -v /opt/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
        --name=elasticsearch-master1 \
        elasticsearch:2.3.3
    ela-master2.example.com:192.168.8.102(master node)
    mkdir -p /opt/elasticsearch/config

    cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
    cluster.name: elasticsearch_cluster
    node.name: ${HOSTNAME}
    node.master: true
    node.data: false
    path.data: /usr/share/elasticsearch/data
    path.logs: /usr/share/elasticsearch/logs
    bootstrap.mlockall: true
    network.host: 0.0.0.0
    network.publish_host: 192.168.8.102
    transport.tcp.port: 9300
    http.port: 9200
    index.refresh_interval: 5s
    script.inline: true
    script.indexed: true
    HERE
    docker run -tid --restart=always \
        -p 9200:9200 \
        -p 9300:9300 \
        --oom-kill-disable=true \
        --memory-swappiness=1 \
        -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
        -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
        -v /opt/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
        --name=elasticsearch-master2 \
        elasticsearch:2.3.3
    ela-master3.example.com:192.168.8.103(master node)
    mkdir -p /opt/elasticsearch/config

    cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
    cluster.name: elasticsearch_cluster
    node.name: ${HOSTNAME}
    node.master: true
    node.data: false
    path.data: /usr/share/elasticsearch/data
    path.logs: /usr/share/elasticsearch/logs
    bootstrap.mlockall: true
    network.publish_host: 192.168.8.103
    transport.tcp.port: 9300
    network.host: 0.0.0.0
    http.port: 9200
    index.refresh_interval: 5s
    script.inline: true
    script.indexed: true
    HERE
    docker run -tid --restart=always \
        -p 9200:9200 \
        -p 9300:9300 \
        --oom-kill-disable=true \
        --memory-swappiness=1 \
        -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
        -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
        -v /opt/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
        --name=elasticsearch-master3 \
        elasticsearch:2.3.3

    C. Data nodes
    ela-data1.example.com:192.168.8.201(data node)
    mkdir -p /opt/elasticsearch/config

    cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
    cluster.name: elasticsearch_cluster
    node.name: ${HOSTNAME}
    node.master: false
    node.data: true
    path.data: /usr/share/elasticsearch/data
    path.logs: /usr/share/elasticsearch/logs
    bootstrap.mlockall: true
    network.publish_host: 192.168.8.201
    transport.tcp.port: 9300
    network.host: 0.0.0.0
    http.port: 9200
    index.refresh_interval: 5s
    script.inline: true
    script.indexed: true
    HERE
    docker run -tid --restart=always \
        -p 9200:9200 \
        -p 9300:9300 \
        --oom-kill-disable=true \
        --memory-swappiness=1 \
        -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
        -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
        -v /opt/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
        --name=elasticsearch-data1 \
        elasticsearch:2.3.3
    ela-data2.example.com:192.168.8.202(data node)
    mkdir -p /opt/elasticsearch/config

    cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
    cluster.name: elasticsearch_cluster
    node.name: ${HOSTNAME}
    node.master: false
    node.data: true
    path.data: /usr/share/elasticsearch/data
    path.logs: /usr/share/elasticsearch/logs
    bootstrap.mlockall: true
    network.host: 0.0.0.0
    network.publish_host: 192.168.8.202
    transport.tcp.port: 9300
    http.port: 9200
    index.refresh_interval: 5s
    script.inline: true
    script.indexed: true
    HERE
    docker run -tid --restart=always \
        -p 9200:9200 \
        -p 9300:9300 \
        --oom-kill-disable=true \
        --memory-swappiness=1 \
        -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
        -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
        -v /opt/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
        --name=elasticsearch-data2 \
        elasticsearch:2.3.3


    4. Configure the cluster (all nodes)

    Nodes are joined into the cluster via the discovery module.

    Append the following lines to /opt/elasticsearch/config/elasticsearch.yml on each of the nodes above, then restart the containers:

    cat >>/opt/elasticsearch/config/elasticsearch.yml <<HERE
    discovery.zen.ping.timeout: 100s
    discovery.zen.fd.ping_timeout: 100s
    discovery.zen.ping.multicast.enabled: false
    discovery.zen.ping.unicast.hosts: ["192.168.8.101:9300", "192.168.8.102:9300", "192.168.8.103:9300", "192.168.8.201:9300", "192.168.8.202:9300","192.168.8.10:9300"]
    discovery.zen.minimum_master_nodes: 2
    gateway.recover_after_nodes: 2
    HERE

    docker restart $(docker ps -a|grep elasticsearch|awk '{print $1}')
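    Optionally, before waiting for the cluster to form, you can confirm that a container actually sees the appended settings (this assumes the volume-mounted config variant, where the file inside the container is the same file edited on the host):
    docker exec elasticsearch-client grep '^discovery' /usr/share/elasticsearch/config/elasticsearch.yml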



    5. Verify the cluster

    Wait about 30s for the nodes to join automatically; on any node of the cluster you should see output similar to the following, which means the cluster is running normally.

    For REST API calls, see the earlier post of notes on the Elasticsearch REST API.

    https://www.elastic.co/guide/en/elasticsearch/reference/current/_cluster_health.html

    [root@ela-client ~]#curl 'http://localhost:9200/_cat/health?v'
    epoch      timestamp cluster               status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
    1465843145 18:39:05  elasticsearch_cluster green           6             0             0                                           100.0%
    [root@ela-client ~]#curl 'localhost:9200/_cat/nodes?v'
    host            ip              heap.percent ram.percent load node.role master name
    192.168.8.102 192.168.8.102           14          99 0.00 -         *      ela-master2.example.com
    192.168.8.103 192.168.8.103            4          99 0.14 -         m      ela-master3.example.com
    192.168.8.202 192.168.8.202           11          99 0.00 d         -      ela-data2.example.com
    192.168.8.10  192.168.8.10            10          98 0.17 -         -      ela-client.example.com
    192.168.8.201 192.168.8.201           11          99 0.00 d         -      ela-data1.example.com
    192.168.8.101 192.168.8.101           12          99 0.01 -         m      ela-master1.example.com

    [root@ela-master2 ~]#curl 'http://localhost:9200/_nodes/process?pretty'
    {
      "cluster_name" : "elasticsearch_cluster",
      "nodes" : {
        "naMz_y4uRRO-FzyxRfTNjw" : {
          "name" : "ela-data2.example.com",
          "transport_address" : "192.168.8.202:9300",
          "host" : "192.168.8.202",
          "ip" : "192.168.8.202",
          "version" : "2.3.3",
          "build" : "218bdf1",
          "http_address" : "192.168.8.202:9200",
          "attributes" : {
            "master" : "false"
          },
          "process" : {
            "refresh_interval_in_millis" : 1000,
            "id" : 1,
            "mlockall" : false
          }
        },
        "7FwFY20ESZaRtIWhYMfDAg" : {
          "name" : "ela-data1.example.com",
          "transport_address" : "192.168.8.201:9300",
          "host" : "192.168.8.201",
          "ip" : "192.168.8.201",
          "version" : "2.3.3",
          "build" : "218bdf1",
          "http_address" : "192.168.8.201:9200",
          "attributes" : {
            "master" : "false"
          },
          "process" : {
            "refresh_interval_in_millis" : 1000,
            "id" : 1,
            "mlockall" : false
          }
        },
        "X0psLpQyR42A4ThiP8ilhA" : {
          "name" : "ela-master3.example.com",
          "transport_address" : "192.168.8.103:9300",
          "host" : "192.168.8.103",
          "ip" : "192.168.8.103",
          "version" : "2.3.3",
          "build" : "218bdf1",
          "http_address" : "192.168.8.103:9200",
          "attributes" : {
            "data" : "false",
            "master" : "true"
          },
          "process" : {
            "refresh_interval_in_millis" : 1000,
            "id" : 1,
            "mlockall" : false
          }
        },
        "MG_GlSAZRkqLq8gMqaZITw" : {
          "name" : "ela-master1.example.com",
          "transport_address" : "192.168.8.101:9300",
          "host" : "192.168.8.101",
          "ip" : "192.168.8.101",
          "version" : "2.3.3",
          "build" : "218bdf1",
          "http_address" : "192.168.8.101:9200",
          "attributes" : {
            "data" : "false",
            "master" : "true"
          },
          "process" : {
            "refresh_interval_in_millis" : 1000,
            "id" : 1,
            "mlockall" : false
          }
        },
        "YxNHUPqVRNK3Liilw_hU9A" : {
          "name" : "ela-master2.example.com",
          "transport_address" : "192.168.8.102:9300",
          "host" : "192.168.8.102",
          "ip" : "192.168.8.102",
          "version" : "2.3.3",
          "build" : "218bdf1",
          "http_address" : "192.168.8.102:9200",
          "attributes" : {
            "data" : "false",
            "master" : "true"
          },
          "process" : {
            "refresh_interval_in_millis" : 1000,
            "id" : 1,
            "mlockall" : false
          }
        },
        "zTKJJ4ipQg6xAcwy1aE-9g" : {
          "name" : "ela-client.example.com",
          "transport_address" : "192.168.8.10:9300",
          "host" : "192.168.8.10",
          "ip" : "192.168.8.10",
          "version" : "2.3.3",
          "build" : "218bdf1",
          "http_address" : "192.168.8.10:9200",
          "attributes" : {
            "data" : "false",
            "master" : "true"
          },
          "process" : {
            "refresh_interval_in_millis" : 1000,
            "id" : 1,
            "mlockall" : false
          }
        }
      }
    }



    Issues:

    Adjusting the JVM heap: the defaults are -Xms256m -Xmx1g.

    https://hub.docker.com/r/itzg/elasticsearch/

    https://hub.docker.com/_/elasticsearch/


    In practice, setting JAVA_OPTS or ES_JAVA_OPTS only appends to the defaults rather than overriding them; the heap can only be overridden via
    -e ES_HEAP_SIZE="32g"
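    As a hedged sketch, the earlier data-node command could be rerun with that variable added (in elasticsearch 2.x, ES_HEAP_SIZE sets both -Xms and -Xmx; pick a value that fits the host, 32g here is only the example from above):
    docker run -tid --restart=always \
        -p 9200:9200 -p 9300:9300 \
        -e ES_HEAP_SIZE="32g" \
        --oom-kill-disable=true --memory-swappiness=1 \
        -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
        -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
        -v /opt/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
        --name=elasticsearch-data1 elasticsearch:2.3.3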


