Unlike the earlier single-host Docker Hadoop setup, this time we build the image from a Dockerfile and then start containers across several hosts, bringing up the Hadoop cluster in a multi-host, multi-container layout.
1. Install and launch the weave network bridge on each host:
curl -L git.io/weave -o /usr/local/bin/weave
chmod +x /usr/local/bin/weave
weave launch
2. Connect the hosts to one another through weave.
On 192.168.130.166:
weave connect 192.168.130.167
weave connect 192.168.130.168
On 192.168.130.167:
weave connect 192.168.130.166
weave connect 192.168.130.168
On 192.168.130.168:
weave connect 192.168.130.166
weave connect 192.168.130.167
(You can verify the mesh afterwards with weave status connections on any host.)
3. On each host, create a Docker network so that Hadoop containers on the same subnet can reach one another:
docker network create hadoop
4. Create a hadoop-cluster-docker directory and place the Hadoop configuration files in it.
The files have been pushed to GitHub; see https://github.com/mayunzhen/hadoop2.8.5-cluster-docker
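The heart of that configuration is pointing every node at the master. As a rough sketch (the files in the repository are authoritative), core-site.xml would set fs.defaultFS to the hadoop-master hostname and the port 9000 that step 6 publishes:

```xml
<?xml version="1.0"?>
<!-- Sketch of core-site.xml; the repo's copy is authoritative. -->
<configuration>
  <property>
    <!-- Every node resolves the NameNode via the hadoop-master hostname. -->
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-master:9000</value>
  </property>
</configuration>
```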
5. Clone the project and build the image:
git clone https://github.com/mayunzhen/hadoop2.8.5-cluster-docker.git
cd hadoop2.8.5-cluster-docker
./build-image.sh
6. Start the cluster.
The containers can be started on whichever hosts you like (tips: the IPs assigned via weave attach must all be in the same subnet so the containers can communicate, and the image tag passed to docker run must match the tag produced by build-image.sh):
docker run -itd -h hadoop-master --name hadoop-master --net=hadoop -v /etc/localtime:/etc/localtime:ro -p 50070:50070 -p 8088:8088 -p 9000:9000 iammayunzhen/hadoop:2.8.5
weave attach 192.168.1.10/24 hadoop-master
docker run -itd -h hadoop-slave1 --name hadoop-slave1 --net=hadoop -v /etc/localtime:/etc/localtime:ro centos_hdp:2.8.5
weave attach 192.168.1.11/24 hadoop-slave1
docker run -itd -h hadoop-slave2 --name hadoop-slave2 --net=hadoop -v /etc/localtime:/etc/localtime:ro centos_hdp:2.8.5
weave attach 192.168.1.12/24 hadoop-slave2
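The weave-assigned addresses above are only useful if the containers can resolve each other by hostname. A minimal sketch of the name/IP mapping implied by the weave attach commands (the file name hosts-hadoop is hypothetical; these lines would be appended to /etc/hosts inside each container):

```shell
# Write the hostname/IP pairs from the `weave attach` commands above.
cat <<'EOF' > hosts-hadoop
192.168.1.10 hadoop-master
192.168.1.11 hadoop-slave1
192.168.1.12 hadoop-slave2
EOF
cat hosts-hadoop
```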
7. Check the cluster status
...
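One quick check is whether the NameNode and ResourceManager web UIs answer on the ports published in step 6 (50070 and 8088). A hedged sketch, where check_ui is a hypothetical helper run on the host carrying hadoop-master:

```shell
# Poll a web UI and report UP/DOWN; the ports come from the
# -p 50070:50070 and -p 8088:8088 mappings on the master container.
check_ui() {
  if curl -s -o /dev/null --max-time 2 "http://$1"; then
    echo "$1 UP"
  else
    echo "$1 DOWN"
  fi
}
check_ui localhost:50070   # NameNode web UI
check_ui localhost:8088    # ResourceManager web UI
```

Inside the master container, `hdfs dfsadmin -report` also lists the live DataNodes, which should include hadoop-slave1 and hadoop-slave2.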