Installing a Kubernetes cluster with kubeadm:
kubernetes uses a master/node architecture.
The master runs: API Server, Scheduler, Controller-Manager, etcd
Each node runs: kubelet, a container engine (e.g. docker), kube-proxy, flannel (networking)
Pods come in two kinds:
autonomous (self-managed) Pods
controller-managed Pods
Ways to install kubernetes:
1) The traditional way: run the k8s components themselves as system-level daemons. This is complicated — many CA certificates and configuration files have to be prepared by hand.
2) The simple way: use kubeadm, the installation tool provided upstream.
Note: every node, including the master, must have docker and kubelet installed and running.
kubeadm manages the cluster in a self-hosted fashion, deploying the k8s components themselves as pods; the master also needs flannel so it can reach the node network.
Installing kubernetes with kubeadm
Environment:
master (also runs etcd): 192.168.44.177
node01: 192.168.44.178
node02: 192.168.44.176
Prerequisites:
1. Hostname-based communication between nodes: /etc/hosts
2. Time synchronization
3. firewalld/iptables and SELinux disabled
OS: CentOS 7
Installation steps (the traditional way, for reference):
1. etcd cluster: master node only
2. flannel: all cluster nodes, including the master
3. Configure the k8s master (master node only):
kubernetes-master
Services started:
kube-apiserver, kube-scheduler, kube-controller-manager
4. Configure each k8s node:
kubernetes-node
Start the docker service first,
then start the k8s services:
kube-proxy, kubelet
kubeadm installation steps:
1. master and nodes: install kubelet, kubeadm, docker
2. master: kubeadm init (generates the CA certificates, etc.)
3. each node: kubeadm join (joins the cluster)
Start the installation:
Set the hostnames:
192.168.44.177 ---> master
192.168.44.178 ---> node01
192.168.44.176 ---> node02
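For example, with hostnamectl (standard on CentOS 7), run on each machine:
hostnamectl set-hostname master    # on 192.168.44.177
hostnamectl set-hostname node01    # on 192.168.44.178
hostnamectl set-hostname node02    # on 192.168.44.176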
Edit the hosts file on every node:
vim /etc/hosts
192.168.44.177 master
192.168.44.178 node01
192.168.44.176 node02
Note: disable the firewall and SELinux, and synchronize time, as sketched below.
To disable SELinux permanently: sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
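A minimal sketch of the remaining prerequisites, assuming chrony is used for time sync (run on every node):
systemctl stop firewalld && systemctl disable firewalld    # stop and disable the firewall
setenforce 0                                               # turn SELinux off for the running system
yum install -y chrony && systemctl enable chronyd && systemctl start chronyd    # keep the clocks in sync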
Set up the following on all nodes (using the Aliyun mirrors):
The docker yum repo:
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
The kubernetes yum repo, also from the Aliyun mirror:
vim kubernetes.repo
[kubernetes]
name=kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
# key used to verify package signatures:
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
Save, exit, and run:
yum repolist
Set up SSH key-based login.
On the master, run:
ssh-keygen -t rsa # press Enter at every prompt
ssh-copy-id node01 (and node02) # copy the key to each node
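Equivalently, as a small loop over the node names configured in /etc/hosts:
for n in node01 node02; do ssh-copy-id root@$n; done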
Install the packages:
yum install docker-ce kubelet kubeadm kubectl -y
If the docker registry cannot be reached directly, an HTTPS proxy can be configured for docker (e.g. www.ik8s.io works); configure it like this:
vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"#配置的代理
Environment="NO_PROXY=127.0.0.1/8,192.168.44.0/24"#配置不让自己代理
ExecStart=/usr/bin/dockerd -H fd://
ExecReload=/bin/kill -s HUP $MAINPID
Then run:
systemctl daemon-reload
Start docker:
systemctl start docker
systemctl status docker
Make sure both of these files contain 1 (cat them first to check). Docker generates a large number of iptables rules, and traffic crossing the Linux bridge must be passed to iptables/ip6tables for kubernetes networking to work, so bridge netfilter has to be enabled:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
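These writes to /proc do not survive a reboot. A persistent variant (my own addition, not part of the original steps) is to put the settings into a sysctl config file:
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system    # reload all sysctl configuration files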
Inspect what the kubelet package installed:
rpm -ql kubelet
/etc/kubernetes/manifests # manifests directory
/etc/sysconfig/kubelet # configuration file
/etc/systemd/system/kubelet.service # systemd unit file
/usr/bin/kubelet # main binary
Note: earlier k8s releases required the swap partition to be off (disable: swapoff -a, re-enable: swapon -a; to disable swap permanently, comment out the swap line in /etc/fstab with #).
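A sketch of disabling swap permanently (the sed pattern is my own; double-check /etc/fstab afterwards):
swapoff -a    # turn swap off immediately
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab    # comment out the swap line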
Initialize the master (run these on the master):
The docker and kubelet services only need to be enabled at boot; they do not have to be started by hand at this point:
systemctl enable docker
systemctl enable kubelet
If swap is not disabled, kubelet can be configured to ignore it; kube-proxy can also be told to use ipvs mode
(the LVS-style IPVS model):
vim /etc/sysconfig/kubelet
# tolerate enabled swap:
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
# have kube-proxy use ipvs:
KUBE_PROXY_MODE=ipvs
Then make sure the ipvs kernel modules are loaded:
ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, nf_conntrack_ipv4
If they are not loaded yet:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
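To also have the modules loaded automatically at boot, a systemd-based sketch (my addition; systemd reads /etc/modules-load.d/ at startup):
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF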
With this in place, ipvs is supported from the initial installation onwards.
Initialization:
kubeadm version # check the installed version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:05:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
kubeadm init --kubernetes-version=v1.13.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
--kubernetes-version # the k8s version to deploy
--pod-network-cidr # the pod network CIDR
--service-cidr # the service network CIDR
--ignore-preflight-errors=Swap # ignore the swap preflight error
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.2. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.13.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
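Before working around this, you can list exactly which images this kubeadm release expects (kubeadm config images list has existed since v1.11):
kubeadm config images list --kubernetes-version=v1.13.3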
The pulls fail because k8s.gcr.io is unreachable. The images can be pulled from a mirror, retagged, and the mirror tags removed; I wrote a small docker pull script for this:
vim image.sh
#!/bin/bash
echo ""
echo "Pulling Docker Images from mirrorgooglecontainers..."
echo "==>kube-apiserver:"
docker pull mirrorgooglecontainers/kube-apiserver:v1.13.3
docker tag mirrorgooglecontainers/kube-apiserver:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3
echo "==>kube-controller-manager:"
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.3
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.3 k8s.gcr.io/kube-controller-manager:v1.13.3
echo "==>kube-scheduler:"
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.3
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.3 k8s.gcr.io/kube-scheduler:v1.13.3
echo "==>kube-proxy:"
docker pull mirrorgooglecontainers/kube-proxy:v1.13.3
docker tag mirrorgooglecontainers/kube-proxy:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
echo "==>etcd:"
docker pull mirrorgooglecontainers/etcd:3.2.24
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
echo "==>pause:"
docker pull mirrorgooglecontainers/pause:3.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
echo "==>coredns:"
docker pull coredns/coredns
docker tag coredns/coredns k8s.gcr.io/coredns:1.2.6
echo "==>docker rmi mirrorgooglecontainers..."
docker rmi coredns/coredns
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.3
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.3
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.3
docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.3
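The same work can be expressed as a loop over image:tag pairs; this sketch is functionally identical to the script above (except that coredns is pinned to 1.2.6 here instead of pulling latest):
#!/bin/bash
images="kube-apiserver:v1.13.3 kube-controller-manager:v1.13.3 kube-scheduler:v1.13.3 kube-proxy:v1.13.3 etcd:3.2.24 pause:3.1"
for i in $images; do
    docker pull mirrorgooglecontainers/$i                 # pull from the mirror
    docker tag mirrorgooglecontainers/$i k8s.gcr.io/$i    # retag to the name kubeadm expects
    docker rmi mirrorgooglecontainers/$i                  # drop the mirror tag
done
docker pull coredns/coredns:1.2.6                         # coredns lives in its own repository
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker rmi coredns/coredns:1.2.6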
Check the pulled images:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-apiserver v1.13.3 fe242e556a99 3 weeks ago 181MB
k8s.gcr.io/kube-controller-manager v1.13.3 0482f6400933 3 weeks ago 146MB
k8s.gcr.io/kube-proxy v1.13.3 98db19758ad4 3 weeks ago 80.3MB
k8s.gcr.io/kube-scheduler v1.13.3 3a6f709e97a0 3 weeks ago 79.6MB
coredns/coredns latest eb516548c180 6 weeks ago 40.3MB
k8s.gcr.io/coredns 1.2.6 eb516548c180 6 weeks ago 40.3MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 5 months ago 220MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 14 months ago 742kB
Now run the initialization command again:
kubeadm init --kubernetes-version=v1.13.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
You can also add --apiserver-advertise-address=<master-ip> to pin the API server to the master's IP address.
Output like the following means the initialization succeeded:
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user: # (a regular user is recommended; for convenience I ran these as root)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root: # run the command below on each node as root to join it to the cluster (note: keep the --token and --discovery-token-ca-cert-hash values so that more nodes can join later)
kubeadm join 192.168.44.177:6443 --token p50b8j.9ot6yuxc11zvrwcs --discovery-token-ca-cert-hash sha256:a2ceffc9a67763cb98ca7fd23fc2d93ea5370a9007620214d5e098b1874ba75b
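If the token expires (the default lifetime is 24 hours) or the values are lost, they can be regenerated on the master:
kubeadm token create --print-join-command    # issues a new token and prints the full join command
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'    # recompute the CA cert hash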
Run the post-initialization commands:
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
Check the cluster component status with kubectl:
[root@master ~]# kubectl get componentstatus (or: kubectl get cs)
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
To list the nodes: kubectl get nodes
List the namespaces:
[root@master ~]# kubectl get ns # (three namespaces; the system-level pods live in kube-system)
NAME STATUS AGE
default Active 38m
kube-public Active 38m
kube-system Active 38m
Install flannel (project page: https://github.com/coreos/flannel) by running:
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Check whether flannel is running (docker images also shows whether the flannel image was pulled successfully):
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-86c58d9df4-hq6db 1/1 Running 0 40m
coredns-86c58d9df4-p7xgr 1/1 Running 0 40m
etcd-master 1/1 Running 0 39m
kube-apiserver-master 1/1 Running 0 39m
kube-controller-manager-master 1/1 Running 0 39m
kube-flannel-ds-amd64-6fqw6 1/1 Running 0 5m6s
kube-proxy-mwqsz 1/1 Running 0 40m
kube-scheduler-master 1/1 Running 0 39m
Steps on the nodes:
First disable the firewall and SELinux.
Set up:
the docker yum repo
and the kubernetes yum repo (the Aliyun mirrors again).
The repo files and the image-pull script can simply be copied from the master to each node:
scp image.sh node02:/root
scp image.sh node01:/root
scp docker-ce.repo kubernetes.repo node01:/etc/yum.repos.d/
scp docker-ce.repo kubernetes.repo node02:/etc/yum.repos.d/
Run: yum repolist
Install the packages:
yum install docker-ce kubelet kubeadm kubectl -y (kubectl is optional on the nodes)
Start docker and enable both services at boot; kubelet itself does not need to be started by hand (it comes up once the node joins the cluster):
systemctl start docker
systemctl enable docker
systemctl enable kubelet
Then run the image-pull script:
sh image.sh # pulls the required images
If swap is not disabled, configure kubelet to ignore it:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
Join the node to the cluster by running the command that kubeadm init printed on the master (if swap was not disabled, append --ignore-preflight-errors=Swap):
kubeadm join 192.168.44.177:6443 --token p50b8j.9ot6yuxc11zvrwcs --discovery-token-ca-cert-hash sha256:a2ceffc9a67763cb98ca7fd23fc2d93ea5370a9007620214d5e098b1874ba75b --ignore-preflight-errors=Swap
Note: if the flannel image cannot be downloaded on a node, the image can be exported from the master and imported on the node, as sketched below.
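A sketch of copying the flannel image over (the image name and tag are taken from the docker image list shown below; ssh keys were set up earlier):
docker save quay.io/coreos/flannel:v0.11.0-amd64 | ssh node01 docker load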
Once every node has run the commands above and joined the cluster,
list the nodes from the master:
[root@master yum.repos.d]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 79m v1.13.3
node01 Ready <none> 15m v1.13.3
node02 Ready <none> 106s v1.13.3
From the master, check that kube-proxy and flannel are running on every node:
[root@master yum.repos.d]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-86c58d9df4-hq6db 1/1 Running 0 78m 10.244.0.3 master <none> <none>
coredns-86c58d9df4-p7xgr 1/1 Running 0 78m 10.244.0.2 master <none> <none>
etcd-master 1/1 Running 0 77m 192.168.44.177 master <none> <none>
kube-apiserver-master 1/1 Running 0 77m 192.168.44.177 master <none> <none>
kube-controller-manager-master 1/1 Running 0 77m 192.168.44.177 master <none> <none>
kube-flannel-ds-amd64-6fqw6 1/1 Running 0 43m 192.168.44.177 master <none> <none>
kube-flannel-ds-amd64-hn29s 1/1 Running 0 14m 192.168.44.178 node01 <none> <none>
kube-flannel-ds-amd64-tds62 1/1 Running 0 56s 192.168.44.176 node02 <none> <none>
kube-proxy-4ppd9 1/1 Running 0 56s 192.168.44.176 node02 <none> <none>
kube-proxy-bpjm8 1/1 Running 0 14m 192.168.44.178 node01 <none> <none>
kube-proxy-mwqsz 1/1 Running 0 78m 192.168.44.177 master <none> <none>
kube-scheduler-master 1/1 Running 0 77m 192.168.44.177 master <none> <none>
On a node, list the local images to confirm everything needed is present:
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-apiserver v1.13.3 fe242e556a99 4 weeks ago 181MB
k8s.gcr.io/kube-controller-manager v1.13.3 0482f6400933 4 weeks ago 146MB
k8s.gcr.io/kube-proxy v1.13.3 98db19758ad4 4 weeks ago 80.3MB
k8s.gcr.io/kube-scheduler v1.13.3 3a6f709e97a0 4 weeks ago 79.6MB
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 4 weeks ago 52.6MB
k8s.gcr.io/coredns 1.2.6 eb516548c180 6 weeks ago 40.3MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 5 months ago 220MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 14 months ago 742kB
At this point the k8s cluster installation is complete.
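As a quick sanity check (my own addition; the deployment name and image are arbitrary), schedule a test pod and confirm it lands on one of the nodes:
kubectl create deployment nginx-test --image=nginx
kubectl get pods -o wide    # should show the pod Running on node01 or node02
kubectl delete deployment nginx-test    # clean up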