First, deploy the JDK environment
-
This time we deploy manually. First, upload the JDK tarball:
[root@iz8vb6evwfagx3tyjx4fl8z soft]# ll
total 189496
-rw-r--r-- 1 root root 194042837 Apr  8 14:11 jdk-8u202-linux-x64.tar.gz
-
Extract it to the target directory:
mkdir -p /opt/test/java
tar -zxvf jdk-8u202-linux-x64.tar.gz -C /opt/test/java
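If you want a quick, safe look at what `-C` does before pointing it at the real tarball, here is a throwaway round trip in a temp directory (all paths below are scratch names for the demo, not part of the deployment):

```shell
# Demo of tar's -C flag using a throwaway archive instead of the JDK tarball.
work=$(mktemp -d)
mkdir -p "$work/src"
echo hello > "$work/src/file.txt"
# Pack the src directory...
tar -czf "$work/pkg.tar.gz" -C "$work" src
# ...then extract it into a separate target directory, as done with the JDK above.
mkdir -p "$work/dest"
tar -xzf "$work/pkg.tar.gz" -C "$work/dest"
cat "$work/dest/src/file.txt"   # prints: hello
```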
-
Edit /etc/profile to add the JDK environment variables:
vim /etc/profile
JAVA_HOME=/opt/test/java/jdk1.8.0_202
CLASSPATH=$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin
export PATH JAVA_HOME CLASSPATH
-
Make the configuration take effect:
source /etc/profile
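A quick sanity check that the variables landed as intended (this sketch re-creates the three exports locally so it can run standalone; the JDK path is the one used above):

```shell
# Re-create the exports from /etc/profile and verify PATH picked up the JDK bin dir.
export JAVA_HOME=/opt/test/java/jdk1.8.0_202
export CLASSPATH=$JAVA_HOME/lib/
export PATH=$PATH:$JAVA_HOME/bin
# Wrap PATH in colons so we can match the entry exactly, not as a substring.
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) echo "JAVA_HOME bin is on PATH" ;;
  *)                    echo "JAVA_HOME bin is missing" ;;
esac
```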
-
Check the JDK version:
[root@iz8vb6evwfagx3tyjx4fl8z soft]# java -version
java version "1.8.0_202"
Java(TM) SE Runtime Environment (build 1.8.0_202-b08)
Java HotSpot(TM) 64-Bit Server VM (build 25.202-b08, mixed mode)
Setting up the ZooKeeper cluster
-
First, create a directory for each node:
cd /opt/test/
mkdir -p cluster/node01 cluster/node02 cluster/node03
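The per-node directories can also be created in one loop; this sketch uses a temp base directory so it can be tried without touching /opt/test:

```shell
# Create the three node directories in a loop (temp base for safety).
base=$(mktemp -d)
for n in node01 node02 node03; do
  mkdir -p "$base/cluster/$n"
done
ls "$base/cluster"   # prints: node01 node02 node03
```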
-
Set the machine's IP:
machine_ip=121.89.209.190
-
Run node 1:
docker run -d -p 2181:2181 -p 2887:2888 -p 3887:3888 \
  --name zookeeper_node01 --restart always \
  -v $PWD/cluster/node01/volume/data:/data \
  -v $PWD/cluster/node01/volume/datalog:/datalog \
  -e "TZ=Asia/Shanghai" \
  -e "ZOO_MY_ID=1" \
  -e "ZOO_SERVERS=server.1=0.0.0.0:2888:3888 server.2=$machine_ip:2888:3888 server.3=$machine_ip:2889:3889" \
  zookeeper:3.4.13
-
Run node 2:
docker run -d -p 2182:2181 -p 2888:2888 -p 3888:3888 \
  --name zookeeper_node02 --restart always \
  -v $PWD/cluster/node02/volume/data:/data \
  -v $PWD/cluster/node02/volume/datalog:/datalog \
  -e "TZ=Asia/Shanghai" \
  -e "ZOO_MY_ID=2" \
  -e "ZOO_SERVERS=server.1=$machine_ip:2887:3887 server.2=0.0.0.0:2888:3888 server.3=$machine_ip:2889:3889" \
  zookeeper:3.4.13
-
Run node 3:
docker run -d -p 2183:2181 -p 2889:2888 -p 3889:3888 \
  --name zookeeper_node03 --restart always \
  -v $PWD/cluster/node03/volume/data:/data \
  -v $PWD/cluster/node03/volume/datalog:/datalog \
  -e "TZ=Asia/Shanghai" \
  -e "ZOO_MY_ID=3" \
  -e "ZOO_SERVERS=server.1=$machine_ip:2887:3887 server.2=$machine_ip:2888:3888 server.3=0.0.0.0:2888:3888" \
  zookeeper:3.4.13
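For reference, the official image's entrypoint turns ZOO_MY_ID and ZOO_SERVERS into /conf/zoo.cfg entries; on node03 the quorum section should come out roughly like this (a sketch, with $machine_ip substituted):

```
clientPort=2181
server.1=121.89.209.190:2887:3887
server.2=121.89.209.190:2888:3888
server.3=0.0.0.0:2888:3888
```

Each node lists itself as 0.0.0.0 on the container-internal ports 2888/3888 so it binds locally, and addresses the other two via the host IP and their remapped host ports.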
-
Check the container logs:
docker logs -f <container ID>
-
You will then find connection errors in the log:
java.net.ConnectException: Connection refused (Connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:558)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:534)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:454)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:435)
	at java.lang.Thread.run(Thread.java:748)
2020-04-08 16:00:44,614 [myid:1] - WARN  [RecvWorker:1:QuorumCnxManager$RecvWorker@1025] - Connection broken for id 1, my id = 1, error = java.io.EOFException
-
The cause: under Docker's default network mode, the nodes have to reach each other via the host IP plus the mapped ports, and node01 cannot find node02 and node03.
-
Find each container's IP:
docker inspect <container ID>
- node01: 172.17.0.2
- node02: 172.17.0.3
- node03: 172.17.0.4
-
So each container has its own IP, but that raises another problem: the IP is assigned dynamically, and we can't know it before the container starts. One solution is to create our own bridge network and assign each container a fixed IP at creation time.
-
So everything above has to be torn down and redone......
-
Stop and remove all the containers:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
[Starting over]
-
Create our own bridge network:
docker network create --driver bridge --subnet=172.18.0.0/16 --gateway=172.18.0.1 zoonet
-
List the Docker networks:
[root@iz8vb6evwfagx3tyjx4fl8z ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
a121ed854d1c        bridge              bridge              local
ab9083cbac8a        host                host                local
4d3012b89f70        none                null                local
26b8cbf5b4c9        zoonet              bridge              local
-
Inspect the new bridge network:
docker network inspect 26b8cbf5b4c9
-
The output:
[
    {
        "Name": "zoonet",
        "Id": "26b8cbf5b4c9d086b81edc22f4627de5ef71a8745374554b440d394ad40858f4",
        "Created": "2020-04-08T16:25:00.982635799+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
-
Revise the ZooKeeper container creation commands
-
Run node 1:
docker run -d -p 2181:2181 --name zookeeper_node01 --privileged --restart always \
  --network zoonet --ip 172.18.0.2 \
  -v /opt/test/cluster/node01/volume/data:/data \
  -v /opt/test/cluster/node01/volume/datalog:/datalog \
  -v /opt/test/cluster/node01/volume/logs:/logs \
  -e ZOO_MY_ID=1 \
  -e "ZOO_SERVERS=server.1=172.18.0.2:2888:3888 server.2=172.18.0.3:2888:3888 server.3=172.18.0.4:2888:3888" \
  4ebfb9474e72   # this is the ZooKeeper image ID
-
Run node 2:
docker run -d -p 2182:2181 --name zookeeper_node02 --privileged --restart always \
  --network zoonet --ip 172.18.0.3 \
  -v /opt/test/cluster/node02/volume/data:/data \
  -v /opt/test/cluster/node02/volume/datalog:/datalog \
  -v /opt/test/cluster/node02/volume/logs:/logs \
  -e ZOO_MY_ID=2 \
  -e "ZOO_SERVERS=server.1=172.18.0.2:2888:3888 server.2=172.18.0.3:2888:3888 server.3=172.18.0.4:2888:3888" \
  4ebfb9474e72
-
Run node 3:
docker run -d -p 2183:2181 --name zookeeper_node03 --privileged --restart always \
  --network zoonet --ip 172.18.0.4 \
  -v /opt/test/cluster/node03/volume/data:/data \
  -v /opt/test/cluster/node03/volume/datalog:/datalog \
  -v /opt/test/cluster/node03/volume/logs:/logs \
  -e ZOO_MY_ID=3 \
  -e "ZOO_SERVERS=server.1=172.18.0.2:2888:3888 server.2=172.18.0.3:2888:3888 server.3=172.18.0.4:2888:3888" \
  4ebfb9474e72
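The three commands differ only in the node number, host port, and fixed IP, so a loop can generate them. This sketch only echoes the generated commands for review (echo drops the inner quotes around ZOO_SERVERS, so treat the output as a readable summary rather than something to pipe into sh):

```shell
# Generate the three docker run commands as text for review.
gen_cmds() {
  image=4ebfb9474e72   # ZooKeeper image ID from above
  servers="server.1=172.18.0.2:2888:3888 server.2=172.18.0.3:2888:3888 server.3=172.18.0.4:2888:3888"
  for i in 1 2 3; do
    # Host port 2181/2182/2183, fixed IP .2/.3/.4, node dir node01/02/03.
    echo docker run -d -p "218$i:2181" --name "zookeeper_node0$i" \
      --privileged --restart always --network zoonet --ip "172.18.0.$((i + 1))" \
      -v "/opt/test/cluster/node0$i/volume/data:/data" \
      -v "/opt/test/cluster/node0$i/volume/datalog:/datalog" \
      -v "/opt/test/cluster/node0$i/volume/logs:/logs" \
      -e "ZOO_MY_ID=$i" -e "ZOO_SERVERS=$servers" "$image"
  done
}
gen_cmds
```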
-
Check the containers:
[root@iz8vb6evwfagx3tyjx4fl8z ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                                        NAMES
82753d13ac44        4ebfb9474e72        "/docker-entrypoint.…"   21 seconds ago       Up 21 seconds       2888/tcp, 3888/tcp, 0.0.0.0:2183->2181/tcp   zookeeper_node03
eee56297eb96        4ebfb9474e72        "/docker-entrypoint.…"   42 seconds ago       Up 41 seconds       2888/tcp, 3888/tcp, 0.0.0.0:2182->2181/tcp   zookeeper_node02
ee8a9710fa3e        4ebfb9474e72        "/docker-entrypoint.…"   About a minute ago   Up About a minute   2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp   zookeeper_node01
-
Now check the container logs again:
docker logs -f <container ID>
- No errors this time.
-
Now let's step into each container and check its role:
# node01
[root@iz8vb6evwfagx3tyjx4fl8z ~]# docker exec -it ee8a9710fa3e bash
bash-4.4# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower

# node02
[root@iz8vb6evwfagx3tyjx4fl8z ~]# docker exec -it eee56297eb96 bash
bash-4.4# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: leader

# node03
[root@iz8vb6evwfagx3tyjx4fl8z ~]# docker exec -it 82753d13ac44 bash
bash-4.4# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower
- All nodes are in good shape; the cluster is up.
-