This lab builds an Elasticsearch cluster out of different node types (client x1, master x3, data x2).
docker run -d --restart=always
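For reference, the full form of the truncated run command above might look like the following sketch; the image tag, port mappings, and volume paths are assumptions for illustration, not taken from this lab:

```shell
# Hypothetical full run command (image tag and host paths are assumptions)
docker run -d --restart=always \
  --name elasticsearch \
  -p 9200:9200 -p 9300:9300 \
  -v /opt/elasticsearch/config:/usr/share/elasticsearch/config \
  -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
  elasticsearch:2.3
```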
cat >elasticsearch.yml <<HERE
cluster.name:
node.name: ${HOSTNAME}
node.master:
node.data:
path.data:
path.logs:
bootstrap.mlockall: true
network.host:
network.publish_host: 192.168.8.10
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
docker restart
Or simply cp the edited config file into the container at the corresponding path, then restart the container.
man docker-run
Note: Docker's default network mode is bridge, which automatically assigns each container a private address. For cluster communication between containers on multiple hosts, you need a service-discovery / auto-registration component such as Consul, Etcd, or Doozer to coordinate them. See the companion post Docker集群之Swarm+Consul+Shipyard.
You must set network.publish_host: 192.168.8.10 to the address the Elasticsearch node advertises to the outside. This is critical: if it is not set, cluster nodes cannot communicate with each other, and you get errors like the following:
[2016-06-21 05:50:19,123][INFO ][discovery.zen            ] [consul-s2.example.com] failed to send join request to master [{consul-s1.example.com}{DeKixlVMS2yoynzX8Y-gdA}{172.17.0.1}{172.17.0.1:9300}{data=false, master=true}], reason [RemoteTransportException
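The 172.17.0.1 addresses in the log above are private bridge addresses that peers on other hosts cannot reach. One way to check what address a container got from the bridge (the container name "es" here is a placeholder):

```shell
# Print the bridge-assigned private IP of a container (name "es" is hypothetical).
# Peers on other hosts cannot route to this 172.17.x.x address, which is why
# network.publish_host must point at the host's routable IP instead.
docker inspect -f '{{ .NetworkSettings.IPAddress }}' es
```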
Simplest of all, you can set the container network to host mode with --net=host, borrowing the host's network interfaces directly; in other words, skipping the container network layer entirely.
At the same time, you can disable the OOM killer for the container, and use the -m flag (default 0, unlimited) to cap the container's memory according to the host's RAM.
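Put together, a host-network run with a memory cap might look like the sketch below; the image tag, 2g limit, and config path are assumptions for illustration:

```shell
# Host networking plus a memory cap (values are illustrative assumptions)
docker run -d --restart=always \
  --net=host \
  -m 2g --oom-kill-disable \
  -v /opt/elasticsearch/config:/usr/share/elasticsearch/config \
  elasticsearch:2.3
```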
[root@ela-client ~]# docker ps
CONTAINER ID
762e4d21aaf8
[root@ela-client ~]# netstat -tunlp|grep java
tcp
tcp
[root@ela-client ~]# ls /opt/elasticsearch/
data
[root@ela-client ~]# docker logs $(docker ps -q)
[2016-06-13 16:09:51,308][INFO ][node
[2016-06-13 16:09:51,311][INFO ][node
... ...
[2016-06-13 16:09:56,408][INFO ][node
[2016-06-13 16:09:56,417][INFO ][gateway
cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name:
node.name: ${HOSTNAME}
node.master:
node.data:
path.data:
path.logs:
bootstrap.mlockall: true
network.host:
network.publish_host: 192.168.8.10
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name:
node.name: ${HOSTNAME}
node.master:
node.data:
path.data:
path.logs:
bootstrap.mlockall: true
network.host:
network.publish_host: 192.168.8.101
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name:
node.name: ${HOSTNAME}
node.master:
node.data:
path.data:
path.logs:
bootstrap.mlockall: true
network.host:
network.publish_host: 192.168.8.102
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name:
node.name: ${HOSTNAME}
node.master:
node.data:
path.data:
path.logs:
bootstrap.mlockall: true
network.host:
network.publish_host: 192.168.8.103
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name:
node.name: ${HOSTNAME}
node.master:
node.data:
path.data:
path.logs:
bootstrap.mlockall: true
network.host:
network.publish_host: 192.168.8.201
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name:
node.name: ${HOSTNAME}
node.master:
node.data:
path.data:
path.logs:
bootstrap.mlockall: true
network.host:
network.publish_host: 192.168.8.202
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
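The six per-node files above differ only in network.publish_host. A loop like the following could stamp them out; the output directory is a scratch path, the address list is the one used in this lab, and the keys whose values the source leaves blank are omitted here for brevity:

```shell
#!/bin/sh
# Generate one elasticsearch.yml per node address; only publish_host varies.
OUT=$(mktemp -d)   # scratch directory for illustration
for ip in 192.168.8.10 192.168.8.101 192.168.8.102 192.168.8.103 192.168.8.201 192.168.8.202; do
  mkdir -p "$OUT/$ip"
  cat >"$OUT/$ip/elasticsearch.yml" <<EOF
node.name: ${HOSTNAME}
network.publish_host: $ip
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
EOF
done
# Show that each file advertises its own address
grep -h publish_host "$OUT"/*/elasticsearch.yml
```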
Use the discovery module to join the nodes into a cluster
Append the following lines to /opt/elasticsearch/config/elasticsearch.yml on each of the nodes above, then restart:
cat >>/opt/elasticsearch/config/elasticsearch.yml <<HERE
discovery.zen.ping.timeout: 100s
discovery.zen.fd.ping_timeout: 100s
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts:
discovery.zen.minimum_master_nodes:
gateway.recover_after_nodes:
HERE
docker restart $(docker ps -a|grep elasticsearch|awk '{print $1}')
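The discovery.zen.minimum_master_nodes value (left blank above, as in the source) is conventionally a majority of the master-eligible nodes to avoid split brain. With this lab's three masters, the standard n/2+1 rule works out as:

```shell
# Majority quorum for zen discovery: floor(masters / 2) + 1
MASTERS=3                      # this lab runs three master-eligible nodes
QUORUM=$(( MASTERS / 2 + 1 ))
echo "discovery.zen.minimum_master_nodes: $QUORUM"
# prints: discovery.zen.minimum_master_nodes: 2
```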
五. Verify the cluster
Wait about 30s and the cluster nodes will join automatically. On any node of the cluster you will see output similar to the following, which means the cluster is running normally.
For REST API calls, see the separate Elasticsearch REST API notes:
https://www.elastic.co/guide/en/elasticsearch/reference/current/_cluster_health.html
[root@ela-client ~]#curl 'http://localhost:9200/_cat/health?v'
epoch      timestamp
1465843145 18:39:05
[root@ela-client ~]#curl 'localhost:9200/_cat/nodes?v'
host          ip
192.168.8.102 192.168.8.102
192.168.8.103 192.168.8.103
192.168.8.202 192.168.8.202
192.168.8.10
192.168.8.201 192.168.8.201
192.168.8.101 192.168.8.101
[root@ela-master2 ~]#curl 'http://localhost:9200/_nodes/process?pretty'
{
}
Issues:
Adjusting the JVM heap; the initial defaults are -Xms256m -Xmx1g
https://hub.docker.com/r/itzg/elasticsearch/
https://hub.docker.com/_/elasticsearch/
Observed in testing: setting JAVA_OPTS or ES_JAVA_OPTS only appends to the JVM options rather than overriding them; the heap can only be overridden with
-e ES_HEAP_SIZE="32g"
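A common heap-sizing rule of thumb (half of host RAM, capped below 32g so compressed object pointers stay enabled) can be sketched as follows before passing the result via -e ES_HEAP_SIZE; the 64g host size is an assumption for illustration:

```shell
# Heap sizing rule of thumb: half of host RAM, capped at 31g
host_mem_gb=64                 # assumed host memory for illustration
heap=$(( host_mem_gb / 2 ))
if [ "$heap" -gt 31 ]; then heap=31; fi
echo "ES_HEAP_SIZE=${heap}g"
# prints: ES_HEAP_SIZE=31g
```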