The biggest lesson learned: when you hit a problem, first re-watch the video and check whether your steps differ from it; only if that turns up nothing, go search the web.
1. On the master node: system initialization, including setting the hostname, configuring the yum repositories, installing dependency packages, disabling the firewall, disabling SELinux, tuning kernel parameters, and upgrading the kernel.
2. On the master node: deploying K8s, including configuring kube-proxy, installing Docker, configuring a Docker registry mirror, installing kubeadm, and assigning each VM a static IP.
3. Clone the master node into node1 and node2.
Then initialize the control-plane node, join the remaining nodes, and deploy the pod network.
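The "tune kernel parameters" step above can be sketched as follows. The values are the ones commonly used for kubeadm clusters (bridged traffic visible to iptables, IP forwarding on, swapping discouraged), and the target path is taken as an argument so the script can be tried outside /etc/sysctl.d — both are my assumptions, not the exact settings from the video.

```shell
#!/bin/bash
# Sketch of the kernel-parameter step. In production the drop-in would go to
# /etc/sysctl.d/kubernetes.conf; here the path is an argument for safe testing.
CONF="${1:-./kubernetes.conf}"
cat > "$CONF" <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
echo "wrote $CONF"
# Apply with: sysctl -p "$CONF"  (requires root)
```

Running `sysctl -p` on the file makes the settings take effect immediately; the drop-in makes them survive reboots.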
The main problem I ran into was initializing kubeadm:
kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
1. The kubeadm images are hosted on k8s.gcr.io, which is blocked by the Great Firewall and unreachable.
2. So I configured a proxy for Docker. Since it is actually the Docker daemon that pulls images, the proxy has to be set globally for the daemon via a systemd drop-in, followed by a Docker restart:
#!/bin/bash
# Create the drop-in directory first; it does not exist by default.
mkdir -p /etc/systemd/system/docker.service.d
# Overwrite (not append) so reruns do not duplicate the [Service] section.
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<EOF
[Service]
Environment="HTTP_PROXY=http://192.168.1.8:1082"
Environment="HTTPS_PROXY=http://192.168.1.8:1082"
EOF
systemctl daemon-reload
systemctl restart docker
3. Even with the proxy working, the transfer was too slow, so in the end I copied the image tarballs from another machine and loaded them into Docker.
My script:
#!/bin/bash
cd /root/kubeadm-basic.images
# Load every saved image tarball in this directory into Docker
for i in *; do
    docker load -i "$i"
done
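For reference, tarballs like these are produced on a machine that can reach k8s.gcr.io, with `docker pull` followed by `docker save`. A dry-run sketch is below: the image list is my assumption of what kubeadm 1.15.1 pulls, and it only prints the commands so they can be reviewed before running on a connected host.

```shell
#!/bin/bash
# Dry run: print (do not execute) the pull/save commands that would mirror
# the kubeadm images to local tar files. Image list assumed for v1.15.1.
REGISTRY="k8s.gcr.io"
IMAGES="kube-apiserver:v1.15.1 kube-controller-manager:v1.15.1 kube-scheduler:v1.15.1 kube-proxy:v1.15.1 pause:3.1 etcd:3.3.10 coredns:1.3.1"

gen_cmds() {
    for img in $IMAGES; do
        # kube-proxy:v1.15.1 -> kube-proxy-v1.15.1.tar
        tar="$(echo "$img" | tr ':' '-').tar"
        echo "docker pull $REGISTRY/$img"
        echo "docker save -o $tar $REGISTRY/$img"
    done
}
gen_cmds
```

Piping the output through `sh` on the connected machine would actually run it; the resulting .tar files are what the load loop above consumes.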
4. kubeadm init failed.
At that point the old state has to be wiped before retrying; see: https://blog.csdn.net/lansye/article/details/79984077
# Reset kubeadm
kubeadm reset
# Stop the related services
systemctl stop docker kubelet etcd
# Remove all containers
docker rm -f $(docker ps -qa)
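In my experience `kubeadm reset` also leaves CNI and etcd state behind (its own output says as much), and the referenced post removes those directories too. A sketch, with the usual default paths, written as a dry run so nothing is actually deleted until you pipe it to `sh`:

```shell
#!/bin/bash
# Dry run: print the rm commands for state that kubeadm reset leaves behind.
# Paths are the usual defaults; verify before actually deleting anything.
cleanup_cmds() {
    for d in /etc/cni/net.d /var/lib/etcd /var/lib/cni "$HOME/.kube/config"; do
        echo "rm -rf $d"
    done
}
cleanup_cmds
```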
5. Debugging the init failure: check the logs.
[root@k8s-master01 ~]# journalctl -xeu kubelet
Jun 26 10:47:00 k8s-master01 kubelet[29773]: E0626 10:47:00.237357 29773 kubelet.go:2169] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:d
Jun 26 10:47:02 k8s-master01 kubelet[29773]: W0626 10:47:02.862214 29773 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
A Google search suggested that flannel was not set up correctly.
References:
https://github.com/kubernetes/kubeadm/issues/292
https://blog.csdn.net/qq_34857250/article/details/82562514
The fixes suggested online did not work; applying them produced the following:
[root@k8s-master01 ~]# kubectl apply -f kube-flannel.yaml
unable to recognize "kube-flannel.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "kube-flannel.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
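In hindsight, the localhost:8080 refusal is kubectl's usual symptom for "no kubeconfig": without one it falls back to the old insecure local port. Here that was expected, since init had failed and /etc/kubernetes/admin.conf had never been copied into place. A quick self-check sketch (the hint text is mine):

```shell
#!/bin/bash
# kubectl falls back to http://localhost:8080 when it has no kubeconfig.
# Check whether one is in place before debugging anything else.
check_kubeconfig() {
    local cfg="${1:-${KUBECONFIG:-$HOME/.kube/config}}"
    if [ -f "$cfg" ]; then
        echo "kubeconfig found: $cfg"
    else
        echo "no kubeconfig at $cfg - copy /etc/kubernetes/admin.conf there first"
    fi
}
check_kubeconfig
```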
I then found these threads:
https://github.com/kubernetes/kubeadm/issues/1031
https://github.com/kubernetes/website/pull/16575/files
They did not reveal the problem either.
Finally I tried upgrading to 1.18.4, but could not get through the firewall to install it. While downgrading back to 1.15.1, I noticed the kubernetes-cni-0.8.6 dependency.
Suspecting that this dependency's version was the problem, I went back to the original video, saw that it used kubernetes-cni-0.7.5, and changed the install command to:
yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1 kubernetes-cni-0.7.5
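Since a stray `yum update` could pull kubernetes-cni back up to 0.8.6 later, it may be worth pinning these packages. The pattern used in the official kubeadm install docs is an `exclude` line in the kubernetes repo file, with deliberate installs passing `--disableexcludes=kubernetes`. A sketch; the target path is an argument here, the real one being the repo file under /etc/yum.repos.d:

```shell
#!/bin/bash
# Sketch: keep yum from silently upgrading the pinned packages by ensuring an
# exclude line exists (normally in /etc/yum.repos.d/kubernetes.repo).
REPO="${1:-./kubernetes.repo}"
if ! grep -q '^exclude=' "$REPO" 2>/dev/null; then
    echo 'exclude=kubelet kubeadm kubectl kubernetes-cni' >> "$REPO"
fi
echo "exclude line ensured in $REPO"
# Deliberate upgrades then need: yum install ... --disableexcludes=kubernetes
```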
This time the init succeeded, printing:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:494d4e8d714a490d2dd160686b023fed4766b44289e30f1c5b2b20dde21a85bb
Following the prompt, I ran:
[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master01 ~]# export KUBECONFIG=$HOME/.kube/config
Deploying the network hit an error:
[root@k8s-master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
Worked around it by downloading the manifest first and applying the local copy:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml
Verification:
[root@k8s-master01 ~]# kubectl create -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@k8s-master01 ~]# kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-czm6h               1/1     Running   0          5h19m
coredns-5c98db65d4-wnwx5               1/1     Running   0          5h19m
etcd-k8s-master01                      1/1     Running   0          5h18m
kube-apiserver-k8s-master01            1/1     Running   0          5h18m
kube-controller-manager-k8s-master01   1/1     Running   0          5h18m
kube-flannel-ds-amd64-b5xmn            1/1     Running   0          9m26s
kube-proxy-xg8ch                       1/1     Running   0          5h19m
kube-scheduler-k8s-master01            1/1     Running   0          5h18m
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    master   5h20m   v1.15.1
Then on each of the two worker nodes, run:
kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:494d4e8d714a490d2dd160686b023fed4766b44289e30f1c5b2b20dde21a85bb
[root@k8s-node01 ~]# kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:494d4e8d714a490d2dd160686b023fed4766b44289e30f1c5b2b20dde21a85bb
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.12. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   8h     v1.15.1
k8s-node01     Ready    <none>   120m   v1.15.1
k8s-node02     Ready    <none>   120m   v1.15.1
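One footnote on joining: the bootstrap token in the join command expires (24 hours by default), so a node added later needs a freshly generated command from the master. A dry-run sketch that just prints the relevant kubeadm invocations, since they need a live control plane to actually run:

```shell
#!/bin/bash
# Dry run: print the commands for regenerating a join command on the master.
join_help() {
    echo "kubeadm token list                          # see existing tokens and their TTLs"
    echo "kubeadm token create --print-join-command   # mint a new token + full join command"
}
join_help
```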