I. Kubernetes components:
master node | The Master node exposes the cluster-management API and drives the Minion nodes to carry out operations on the cluster. |
minion node | A Minion node is where the Docker containers actually run; it interacts with the local Docker daemon and provides proxying. |
etcd | A key-value store that holds Kubernetes cluster state. |
apiserver | The entry point for users interacting with the cluster; it wraps create/read/update/delete operations on the core objects behind a RESTful API, persists them via etcd, and keeps the objects consistent. |
scheduler | Responsible for scheduling cluster resources; for example, when a pod exits abnormally and must be placed on a new machine, the scheduler picks the most suitable node according to its scheduling algorithm. |
controller-manager | Mainly ensures that the replica count defined by a replicationController matches the number of pods actually running, and keeps the mapping from services to pods up to date. |
kubelet | Runs on the minion nodes and interacts with the local Docker daemon, e.g. starting/stopping containers and monitoring their state. |
proxy | Runs on the minion nodes and provides proxying for pods: it periodically fetches service information from etcd and rewrites iptables rules accordingly, forwarding traffic to the node hosting the target pod (the earliest versions forwarded traffic in userspace, which was inefficient). |
flannel | Flannel re-plans IP address allocation across all nodes in the cluster, so containers on different nodes receive non-overlapping addresses from one flat internal network and can reach each other directly over those internal IPs. |
II. Host configuration:
Role | OS | IP | Components |
master | CentOS-7 | 192.168.10.5 | docker, etcd, api-server, scheduler, controller-manager, flannel, harbor |
node-1 | CentOS-7 | 192.168.10.8 | docker, etcd, kubelet, proxy, flannel |
node-2 | CentOS-7 | 192.168.10.9 | docker, etcd, kubelet, proxy, flannel |
III. Installing and configuring the Kubernetes cluster
On the master host:
1. Install etcd
Install with yum:
[root@master ~]# yum -y install etcd
[root@master ~]# rpm -ql etcd
/etc/etcd
/etc/etcd/etcd.conf
/usr/bin/etcd
/usr/bin/etcdctl
/usr/lib/systemd/system/etcd.service
Edit the configuration:
[root@master ~]# egrep -v "^#|^$" /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_NAME="default"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.10.5:2379"
Start the service and check the listening ports:
[root@master ~]# systemctl start etcd
[root@master ~]# netstat -tunlp|grep etcd
tcp        0      0 127.0.0.1:2380    0.0.0.0:*    LISTEN    20919/etcd
tcp6       0      0 :::2379           :::*         LISTEN    20919/etcd
Set up the etcd network:
For an etcd cluster (high availability), see: https://www.cnblogs.com/51wansheng/p/10234036.html
2. Configure the api-server service
On the master host, only the master package is installed:
[root@master kubernetes]# yum -y install kubernetes-master
[root@master kubernetes]# tree /etc/kubernetes/
/etc/kubernetes/
├── apiserver
├── config
├── controller-manager
└── scheduler
0 directories, 4 files
Edit the apiserver configuration:
[root@master kubernetes]# egrep -v "^#|^$" /etc/kubernetes/apiserver
# Address the API service listens on
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# Port the API service listens on
KUBE_API_PORT="--port=8080"
# etcd service address
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.10.5:2379"
# Cluster service IP range (cluster DNS addresses are allocated from it)
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Admission control for API requests; AlwaysAdmit applies no restrictions and lets all nodes reach the apiserver
KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"
KUBE_API_ARGS=""
Start the service:
[root@master kubernetes]# systemctl start kube-apiserver
[root@master kubernetes]# netstat -tunlp|grep apiserve
tcp6       0      0 :::6443           :::*         LISTEN    21042/kube-apiserve
tcp6       0      0 :::8080           :::*         LISTEN    21042/kube-apiserve
3. Configure the scheduler service
[root@master kubernetes]# egrep -v "^#|^$" /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS=""
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=192.168.10.5:8080"
KUBE_LEADER_ELECT="--leader-elect"
Start the service:
[root@master kubernetes]# systemctl start kube-scheduler
[root@master kubernetes]# netstat -tunlp|grep kube-sche
tcp6       0      0 :::10251          :::*         LISTEN    21078/kube-schedule
[root@master kubernetes]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler Plugin
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-01-09 19:20:50 CST; 5s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 21078 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─21078 /usr/bin/kube-scheduler --logtostderr=true --v=4 --master=192.168.10.5:8080
4. Configure the controller-manager service
[root@master kubernetes]# egrep -v "^#|^$" /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=""
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=192.168.10.5:8080"
Start the service:
[root@master kubernetes]# systemctl start kube-controller-manager
[root@master kubernetes]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-01-09 19:26:38 CST; 8s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 21104 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─21104 /usr/bin/kube-controller-manager --logtostderr=true --v=4 --master=192.168.10.5:8080
[root@master kubernetes]# netstat -tunlp|grep kube-controll
tcp6       0      0 :::10252          :::*         LISTEN    21104/kube-controll
That completes the master configuration:
[root@master kubernetes]# netstat -tunlp|grep etc
tcp        0      0 127.0.0.1:2380    0.0.0.0:*    LISTEN    20919/etcd
tcp6       0      0 :::2379           :::*         LISTEN    20919/etcd
[root@master kubernetes]# netstat -tunlp|grep kube
tcp6       0      0 :::10251          :::*         LISTEN    21078/kube-schedule
tcp6       0      0 :::6443           :::*         LISTEN    21042/kube-apiserve
tcp6       0      0 :::10252          :::*         LISTEN    21104/kube-controll
tcp6       0      0 :::8080           :::*         LISTEN    21042/kube-apiserve
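As a quick self-check, the expected master-side ports can be compared against the netstat output with a few lines of shell. A minimal sketch (the `listening` sample below is copied from the output above; on a live host you would fill it from `netstat -tunlp` instead):

```shell
#!/bin/sh
# Sketch: verify that every master-side port from the summary above is
# listening. The $listening sample is copied from this document's netstat
# output; on a real host use: listening=$(netstat -tunlp | awk '{print $4}')
listening=":::2379 127.0.0.1:2380 :::6443 :::8080 :::10251 :::10252"

check_port() {
  case " $listening " in
    *":$1 "*) echo "listening" ;;
    *)        echo "missing"   ;;
  esac
}

# etcd client, apiserver (secure/insecure), scheduler, controller-manager
for p in 2379 6443 8080 10251 10252; do
  echo "port $p: $(check_port "$p")"
done
```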
Operations on the node servers
Run the following on both node-1 and node-2:
[root@node-1 kubernetes]# yum -y install kubernetes-node
5. Edit the kubelet configuration:
[root@node-1 kubernetes]# egrep -v "^#|^$" /etc/kubernetes/config
# Log to standard error
KUBE_LOGTOSTDERR="--logtostderr=true"
# Log level
KUBE_LOG_LEVEL="--v=0"
# Allow containers to request privileged mode; default false
KUBE_ALLOW_PRIV="--allow-privileged=false"
# Master node address
KUBE_MASTER="--master=http://192.168.10.5:8080"
[root@node-1 kubernetes]# egrep -v "^#|^$" /etc/kubernetes/kubelet
# Address the kubelet listens on
KUBELET_ADDRESS="--address=0.0.0.0"
# Port the kubelet listens on
KUBELET_PORT="--port=10250"
# Kubelet hostname (on node-2 use that node's IP, 192.168.10.9)
KUBELET_HOSTNAME="--hostname-override=192.168.10.8"
# apiserver address
KUBELET_API_SERVER="--api-servers=http://192.168.10.5:8080"
# Default pod infrastructure container image
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
Start the kubelet service:
[root@node-1 kubernetes]# systemctl start kubelet
[root@node-1 kubernetes]# netstat -tunlp|grep kube
tcp        0      0 127.0.0.1:10248   0.0.0.0:*    LISTEN    23883/kubelet
tcp6       0      0 :::10250          :::*         LISTEN    23883/kubelet
tcp6       0      0 :::10255          :::*         LISTEN    23883/kubelet
tcp6       0      0 :::4194           :::*         LISTEN    23883/kubelet
[root@node-1 kubernetes]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-01-09 20:58:55 CST; 37min ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 23883 (kubelet)
   Memory: 31.6M
   CGroup: /system.slice/kubelet.service
           ├─23883 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://192.168.10.5:8080 --address=0.0.0.0 --port=10250 --hostname-override=192.168.10.8 --allow-...
           └─23927 journalctl -k -f
6. Edit the proxy configuration:
[root@node-1 kubernetes]# egrep -v "^#|^$" /etc/kubernetes/proxy
KUBE_PROXY_ARGS=""
# Node hostname (on node-2 use 192.168.10.9)
NODE_HOSTNAME="--hostname-override=192.168.10.8"
Start the proxy service:
[root@node-1 kubernetes]# systemctl start kube-proxy
[root@node-1 kubernetes]# netstat -tunlp|grep kube-pro
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 24544/kube-proxy
[root@node-1 kubernetes]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-01-09 21:48:41 CST; 5s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 24544 (kube-proxy)
   Memory: 12.0M
   CGroup: /system.slice/kube-proxy.service
           └─24544 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://192.168.10.5:8080
Check the cluster nodes (on the master):
[root@master kubernetes]# kubectl get node
NAME           STATUS    AGE
192.168.10.8   Ready     59m
192.168.10.9   Ready     51m
[root@master kubernetes]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
etcd-0               Healthy   {"health": "true"}
controller-manager   Healthy   ok
scheduler            Healthy   ok
IV. Installing and configuring the flanneld service
Run on all three hosts (the docker0 bridge needs to obtain its IP range from flannel):
Install flannel:
[root@master kubernetes]# yum -y install flannel
flannel's configuration file and binaries:
[root@master kubernetes]# rpm -ql flannel
/etc/sysconfig/flanneld
/run/flannel
/usr/bin/flanneld
/usr/bin/flanneld-start
/usr/lib/systemd/system/docker.service.d/flannel.conf
/usr/lib/systemd/system/flanneld.service
Set the flannel network in etcd (on the etcd server):
# Create a directory /k8s/network in etcd to store the flannel network information
[root@master kubernetes]# etcdctl mkdir /k8s/network
# Set /k8s/network/config to the string value '{"Network": "10.255.0.0/16"}'
[root@master kubernetes]# etcdctl set /k8s/network/config '{"Network": "10.255.0.0/16"}'
{"Network": "10.255.0.0/16"}
# Check the stored value
[root@master kubernetes]# etcdctl get /k8s/network/config
{"Network": "10.255.0.0/16"}
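Since flanneld fails at startup when this key holds malformed JSON, it can be worth validating the string locally before writing it into etcd. A minimal sketch, assuming `python3` is available on the host:

```shell
# Validate the flannel network config string before pushing it into etcd;
# flanneld cannot start if this value is not valid JSON.
config='{"Network": "10.255.0.0/16"}'
if echo "$config" | python3 -c 'import json, sys; json.load(sys.stdin)' 2>/dev/null; then
  echo "config OK"     # safe to run: etcdctl set /k8s/network/config "$config"
else
  echo "config INVALID"
fi
```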
Notes:
How flanneld starts up:
(1) It fetches the network configuration from etcd.
(2) It carves out a subnet for the node and registers it in etcd.
(3) It writes the subnet information to /run/flannel/subnet.env.
Edit the flanneld configuration file:
[root@node-1 kubernetes]# egrep -v "^#|^$" /etc/sysconfig/flanneld
# etcd service address
FLANNEL_ETCD_ENDPOINTS="http://192.168.10.5:2379"
# Note: /k8s/network is the etcd prefix under which Network was set above
FLANNEL_ETCD_PREFIX="/k8s/network"
# Physical network interface to bind
FLANNEL_OPTIONS="--iface=ens33"
Start the flanneld service:
[root@master kubernetes]# systemctl start flanneld
[root@master kubernetes]# netstat -tunlp|grep flan
udp        0      0 192.168.10.5:8285  0.0.0.0:*              21369/flanneld
Check the flannel subnet information in /run/flannel/subnet.env (written while flanneld is running):
[root@master kubernetes]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.255.0.0/16
FLANNEL_SUBNET=10.255.75.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
A helper script then converts subnet.env into a Docker environment-variable file, /run/flannel/docker. The docker0 address is determined by the FLANNEL_SUBNET value in /run/flannel/subnet.env:
[root@master kubernetes]# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=10.255.75.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=10.255.75.1/24 --ip-masq=true --mtu=1472"
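The conversion that helper performs can be sketched in a few lines of shell. The subnet.env values below are copied from the output above; on a real node you would source /run/flannel/subnet.env itself. Note that docker's `--ip-masq` ends up as the inverse of FLANNEL_IPMASQ: when flannel does not masquerade, docker must.

```shell
#!/bin/sh
# Sketch of the subnet.env -> docker-options conversion. The heredoc
# values are copied from the subnet.env shown above; normally you would
# source the real /run/flannel/subnet.env instead.
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.255.0.0/16
FLANNEL_SUBNET=10.255.75.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
EOF

. /tmp/subnet.env

# docker0 gets its --bip directly from FLANNEL_SUBNET; --ip-masq=true
# here because FLANNEL_IPMASQ is false (one of the two must masquerade)
DOCKER_NETWORK_OPTIONS="--bip=${FLANNEL_SUBNET} --ip-masq=true --mtu=${FLANNEL_MTU}"
echo "$DOCKER_NETWORK_OPTIONS"
```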
Check the flannel interfaces:
Note: before starting flannel, a network configuration record must be added to etcd (as done above); flannel uses it to allocate the virtual IP range handed to Docker on each minion.
Because flannel takes over the docker0 bridge, the flanneld service must start before the docker service. If docker is already running, stop docker first, start flanneld, then start docker again.
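This ordering can also be encoded in systemd rather than remembered by hand. A sketch of a drop-in for the docker unit (the flannel RPM already ships a similar file, listed by `rpm -ql flannel` above; the path below is an illustrative choice):

```ini
; /etc/systemd/system/docker.service.d/after-flanneld.conf (hypothetical path)
[Unit]
Requires=flanneld.service
After=flanneld.service
```

Run `systemctl daemon-reload` after creating the file so systemd picks it up.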
[root@master kubernetes]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.255.75.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:fe:16:d0:a1  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.5  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 fe80::a6a4:698e:10d6:69cf  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:53:a7:50  txqueuelen 1000  (Ethernet)
        RX packets 162601  bytes 211261546 (201.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 28856  bytes 10747031 (10.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.255.75.0  netmask 255.255.0.0  destination 10.255.75.0
        inet6 fe80::bf21:8888:cfcc:f153  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 144 (144.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 472702  bytes 193496275 (184.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 472702  bytes 193496275 (184.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
As shown, the docker0 bridge obtained its subnet address via flannel.
V. Testing container network connectivity across the cluster:
Start a container on any two nodes.
Run a container on node-1:
# Pull the image
[root@node-1 kubernetes]# docker pull ubuntu
# Start a container
[root@node-1 ~]# docker run -it --name test-docker ubuntu
# Check the container's IP address
[root@node-1 kubernetes]# docker inspect test-docker|grep IPAddress
    "SecondaryIPAddresses": null,
    "IPAddress": "10.255.18.2",
            "IPAddress": "10.255.18.2",
Run a container on the master:
[root@master kubernetes]# docker pull ubuntu
[root@master kubernetes]# docker run -it --name test-docker ubuntu
[root@master ~]# docker inspect test-docker|grep IPAddress
    "SecondaryIPAddresses": null,
    "IPAddress": "10.255.75.2",
            "IPAddress": "10.255.75.2",
Install the ping command in both containers.
Before testing connectivity, run the following on the hosts (opens the firewall):
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
iptables -L -n
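For the "install ping" step above: the stock ubuntu image ships without a ping binary, so inside each test container something like the following is needed (a sketch, assuming the Ubuntu package name iputils-ping and working network access from the container):

```shell
# Run inside each test container; the base ubuntu image has no ping.
apt-get update
apt-get install -y iputils-ping
```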
From the container on the master node, ping the following:
root@38165ea9582b:~# ping 192.168.10.8    # node-1 physical NIC
PING 192.168.10.8 (192.168.10.8): 56 data bytes
64 bytes from 192.168.10.8: icmp_seq=0 ttl=63 time=1.573 ms
64 bytes from 192.168.10.8: icmp_seq=1 ttl=63 time=0.553 ms
^C--- 192.168.10.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.553/1.063/1.573/0.510 ms
root@38165ea9582b:~# ping 10.255.18.2    # IP of the container running on node-1
PING 10.255.18.2 (10.255.18.2): 56 data bytes
64 bytes from 10.255.18.2: icmp_seq=0 ttl=60 time=1.120 ms
64 bytes from 10.255.18.2: icmp_seq=1 ttl=60 time=1.264 ms
^C--- 10.255.18.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.120/1.192/1.264/0.072 ms
root@38165ea9582b:~# ping 10.255.18.1    # node-1 docker0 bridge
PING 10.255.18.1 (10.255.18.1): 56 data bytes
64 bytes from 10.255.18.1: icmp_seq=0 ttl=61 time=1.364 ms
64 bytes from 10.255.18.1: icmp_seq=1 ttl=61 time=0.741 ms
^C--- 10.255.18.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.741/1.052/1.364/0.312 ms
root@38165ea9582b:~# ping 10.255.18.0    # node-1 flannel0 interface
PING 10.255.18.0 (10.255.18.0): 56 data bytes
64 bytes from 10.255.18.0: icmp_seq=0 ttl=61 time=1.666 ms
64 bytes from 10.255.18.0: icmp_seq=1 ttl=61 time=0.804 ms
^C--- 10.255.18.0 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.804/1.235/1.666/0.431 ms
This completes the basic K8S environment installation. Summary:
Services and ports running on the master node:
[root@master ~]# netstat -tunlp|grep "etcd"
tcp        0      0 127.0.0.1:2380    0.0.0.0:*    LISTEN    20919/etcd
tcp6       0      0 :::2379           :::*         LISTEN    20919/etcd
[root@master ~]# netstat -tunlp|grep "kube"
tcp6       0      0 :::10251          :::*         LISTEN    21078/kube-schedule
tcp6       0      0 :::6443           :::*         LISTEN    21042/kube-apiserve
tcp6       0      0 :::10252          :::*         LISTEN    21104/kube-controll
tcp6       0      0 :::8080           :::*         LISTEN    21042/kube-apiserve
[root@master ~]# netstat -tunlp|grep "flannel"
udp        0      0 192.168.10.5:8285  0.0.0.0:*              21369/flanneld
Services and ports running on the node servers:
[root@node-1 ~]# netstat -tunlp|grep "kube"
tcp        0      0 127.0.0.1:10248   0.0.0.0:*    LISTEN    69207/kubelet
tcp        0      0 127.0.0.1:10249   0.0.0.0:*    LISTEN    24544/kube-proxy
tcp6       0      0 :::10250          :::*         LISTEN    69207/kubelet
tcp6       0      0 :::10255          :::*         LISTEN    69207/kubelet
tcp6       0      0 :::4194           :::*         LISTEN    69207/kubelet
[root@node-1 ~]# netstat -tunlp|grep "flannel"
udp        0      0 192.168.10.8:8285  0.0.0.0:*              47254/flanneld