k8s Binary Deployment 03
4 Deploy the kube-apiserver service on the master nodes
Download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md
Download links:
https://dl.k8s.io/v1.15.5/kubernetes-server-linux-amd64.tar.gz
https://dl.k8s.io/v1.15.5/kubernetes-client-linux-amd64.tar.gz
https://dl.k8s.io/v1.15.5/kubernetes-node-linux-amd64.tar.gz
4.1 Issue the client certificate
All certificates are issued on 0.210.
This certificate is used for communication between the apiserver and etcd.
4.1.1 Create the JSON config file for the certificate CSR
# vim /opt/certs/client-csr.json
{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "zq",
            "OU": "ops"
        }
    ]
}
4.1.2 Generate the client certificate files
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare client

[root@server05 certs]# ll|grep client
-rw-r--r-- 1 root root  993 Apr 20 21:30 client.csr
-rw-r--r-- 1 root root  280 Apr 20 21:30 client-csr.json
-rw------- 1 root root 1675 Apr 20 21:30 client-key.pem
-rw-r--r-- 1 root root 1359 Apr 20 21:30 client.pem
4.2 Issue the kube-apiserver certificate
This certificate secures the service the apiserver exposes externally.
4.2.1 Create the JSON config file for the certificate CSR
The hosts list in this config contains every address where an apiserver might be deployed.
10.11.0.218 is the VIP of the reverse proxy.
# vim /opt/certs/apiserver-csr.json
{
    "CN": "k8s-apiserver",
    "hosts": [
        "127.0.0.1",
        "192.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "10.11.0.218",
        "10.11.0.207",
        "10.11.0.208",
        "10.11.0.209"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "chinasoft",
            "OU": "ops"
        }
    ]
}
4.2.2 Generate the kube-apiserver certificate files
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json | cfssl-json -bare apiserver

[root@server05 certs]# ll|grep apiserver
-rw-r--r-- 1 root root 1249 Apr 20 21:31 apiserver.csr
-rw-r--r-- 1 root root  566 Apr 20 21:31 apiserver-csr.json
-rw------- 1 root root 1675 Apr 20 21:31 apiserver-key.pem
-rw-r--r-- 1 root root 1590 Apr 20 21:31 apiserver.pem
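Optionally, you can sanity-check the SANs baked into the new certificate with cfssl-certinfo (the same cfssl toolchain used above; this check is not required):

# The "sans" field should list every host from apiserver-csr.json, including the VIP 10.11.0.218
cfssl-certinfo -cert apiserver.pem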
4.3 Download and install kube-apiserver
Using 0.207 as the example
# Upload and unpack
tar xf kubernetes-server-linux-amd64-v1.15.5.tar.gz -C /usr/local
cd /usr/local
mv kubernetes/ kubernetes-v1.15.5
ln -s /usr/local/kubernetes-v1.15.5/ /usr/local/kubernetes
# Remove the source tarball and docker image files
cd /usr/local/kubernetes
rm -rf kubernetes-src.tar.gz
cd server/bin
rm -f *.tar
rm -f *_tag
# Symlink the kubectl command into the system PATH
ln -s /usr/local/kubernetes/server/bin/kubectl /usr/bin/kubectl
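As a quick optional sanity check, kubectl should now resolve from the PATH:

# Only the client version is meaningful until an apiserver is running
kubectl version --client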
4.4 Deploy the apiserver service
4.4.1 Copy the certificate files
Copy the certificate files into /usr/local/kubernetes/server/bin/cert.
# Create the directory
mkdir -p /usr/local/kubernetes/server/bin/cert
cd /usr/local/kubernetes/server/bin/cert
# Copy the three certificate pairs (CA, client, apiserver)
scp server05:/opt/certs/ca.pem .
scp server05:/opt/certs/ca-key.pem .
scp server05:/opt/certs/client.pem .
scp server05:/opt/certs/client-key.pem .
scp server05:/opt/certs/apiserver.pem .
scp server05:/opt/certs/apiserver-key.pem .
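Optionally, confirm the copied certificates really chain to the CA (assuming openssl is installed; both commands should print "OK"):

openssl verify -CAfile ca.pem client.pem
openssl verify -CAfile ca.pem apiserver.pem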
4.4.2 Create the audit config
The audit policy is a configuration Kubernetes requires; you do not need to fully understand it to use it as-is.
mkdir /usr/local/kubernetes/server/conf
# vim /usr/local/kubernetes/server/conf/audit.yaml
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
4.4.3 Create the apiserver startup script
# vim /usr/local/kubernetes/server/bin/kube-apiserver.sh
#!/bin/bash
./kube-apiserver \
  --apiserver-count 2 \
  --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
  --audit-policy-file ../conf/audit.yaml \
  --authorization-mode RBAC \
  --client-ca-file ./cert/ca.pem \
  --requestheader-client-ca-file ./cert/ca.pem \
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
  --etcd-cafile ./cert/ca.pem \
  --etcd-certfile ./cert/client.pem \
  --etcd-keyfile ./cert/client-key.pem \
  --etcd-servers https://10.11.0.207:2379,https://10.11.0.208:2379,https://10.11.0.209:2379 \
  --service-account-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --service-node-port-range 3000-29999 \
  --target-ram-mb=1024 \
  --kubelet-client-certificate ./cert/client.pem \
  --kubelet-client-key ./cert/client-key.pem \
  --log-dir /data/logs/kubernetes/kube-apiserver \
  --tls-cert-file ./cert/apiserver.pem \
  --tls-private-key-file ./cert/apiserver-key.pem \
  --v 2
# Make it executable
chmod +x /usr/local/kubernetes/server/bin/kube-apiserver.sh
4.4.4 Create the supervisor config to run apiserver
Install supervisor
yum install supervisor -y
systemctl start supervisord
systemctl enable supervisord

# vim /etc/supervisord.d/kube-apiserver.ini
[program:kube-apiserver]        ; program display name, like sections in my.cnf; multiple allowed
command=sh /usr/local/kubernetes/server/bin/kube-apiserver.sh
numprocs=1                      ; number of processes to start (def 1)
directory=/usr/local/kubernetes/server/bin
autostart=true                  ; start automatically (default: true)
autorestart=true                ; restart automatically (default: true)
startsecs=30                    ; seconds the process must stay up to count as started (def. 1)
startretries=3                  ; start retries (default 3)
exitcodes=0,2                   ; expected exit codes (default 0,2)
stopsignal=QUIT                 ; stop signal (default TERM)
stopwaitsecs=10                 ; seconds to wait after the stop signal (default 10)
user=root                       ; user to run as
redirect_stderr=true            ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log
stdout_logfile_maxbytes=64MB    ; max log file size (default 50MB)
stdout_logfile_backups=4        ; number of rotated log files (default 10)
stdout_capture_maxbytes=1MB     ; capture pipe size (default 0)
; the process spawns children of its own; these options prevent orphan processes
killasgroup=true
stopasgroup=true
4.4.5 Start the apiserver service and check it
mkdir -p /data/logs/kubernetes/kube-apiserver
supervisorctl update
supervisorctl status
netstat -nltup|grep kube-api
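Beyond checking the listening port, you can optionally probe the apiserver's healthz endpoint over TLS using the client certificate copied earlier (run from /usr/local/kubernetes/server/bin; should print "ok"):

curl --cacert ./cert/ca.pem --cert ./cert/client.pem --key ./cert/client-key.pem https://127.0.0.1:6443/healthz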
4.4.6 Deploy and start apiserver on the remaining machines
Deployment on the other cluster machines is identical, so it is omitted.
4.5 Deploy the controller-manager service
The binaries for apiserver, controller-manager, and kube-scheduler ship in the same tarball, so the latter two do not need to be unpacked separately.
All three services also run on the same host and talk to each other over http://127.0.0.1, so no certificates are needed between them.
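As an optional check that this local channel answers (8080 is the apiserver's default insecure port in 1.15, which the --master flags below rely on):

# Should print "ok"
curl http://127.0.0.1:8080/healthz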
4.5.1 Create the controller-manager startup script
# vim /usr/local/kubernetes/server/bin/kube-controller-manager.sh
#!/bin/sh
./kube-controller-manager \
  --cluster-cidr 172.7.0.0/16 \
  --leader-elect true \
  --log-dir /data/logs/kubernetes/kube-controller-manager \
  --master http://127.0.0.1:8080 \
  --service-account-private-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --root-ca-file ./cert/ca.pem \
  --v 2
# Make it executable
chmod +x /usr/local/kubernetes/server/bin/kube-controller-manager.sh
4.5.2 Create the supervisor config
# vim /etc/supervisord.d/kube-controller-manager.ini
[program:kube-controller-manager]   ; program display name
command=sh /usr/local/kubernetes/server/bin/kube-controller-manager.sh
numprocs=1                      ; number of processes to start (def 1)
directory=/usr/local/kubernetes/server/bin
autostart=true                  ; start automatically (default: true)
autorestart=true                ; restart automatically (default: true)
startsecs=30                    ; seconds the process must stay up to count as started (def. 1)
startretries=3                  ; start retries (default 3)
exitcodes=0,2                   ; expected exit codes (default 0,2)
stopsignal=QUIT                 ; stop signal (default TERM)
stopwaitsecs=10                 ; seconds to wait after the stop signal (default 10)
user=root                       ; user to run as
redirect_stderr=true            ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log
stdout_logfile_maxbytes=64MB    ; max log file size (default 50MB)
stdout_logfile_backups=4        ; number of rotated log files (default 10)
stdout_capture_maxbytes=1MB     ; capture pipe size (default 0)
; the process spawns children of its own; these options prevent orphan processes
killasgroup=true
stopasgroup=true
4.5.3 Start the service and check it
mkdir -p /data/logs/kubernetes/kube-controller-manager
supervisorctl update
supervisorctl status
4.5.4 Deploy and start on the rest of the cluster
Identical on the other machines, so omitted.
4.6 Deploy the kube-scheduler service
4.6.1 Create the startup script
# vim /usr/local/kubernetes/server/bin/kube-scheduler.sh
#!/bin/sh
./kube-scheduler \
  --leader-elect \
  --log-dir /data/logs/kubernetes/kube-scheduler \
  --master http://127.0.0.1:8080 \
  --v 2
# Make it executable
chmod +x /usr/local/kubernetes/server/bin/kube-scheduler.sh
4.6.2 Create the supervisor config
# vim /etc/supervisord.d/kube-scheduler.ini
[program:kube-scheduler]
command=sh /usr/local/kubernetes/server/bin/kube-scheduler.sh
numprocs=1                      ; number of processes to start (def 1)
directory=/usr/local/kubernetes/server/bin
autostart=true                  ; start automatically (default: true)
autorestart=true                ; restart automatically (default: true)
startsecs=30                    ; seconds the process must stay up to count as started (def. 1)
startretries=3                  ; start retries (default 3)
exitcodes=0,2                   ; expected exit codes (default 0,2)
stopsignal=QUIT                 ; stop signal (default TERM)
stopwaitsecs=10                 ; seconds to wait after the stop signal (default 10)
user=root                       ; user to run as
redirect_stderr=true            ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log
stdout_logfile_maxbytes=64MB    ; max log file size (default 50MB)
stdout_logfile_backups=4        ; number of rotated log files (default 10)
stdout_capture_maxbytes=1MB     ; capture pipe size (default 0)
; the process spawns children of its own; these options prevent orphan processes
killasgroup=true
stopasgroup=true
4.6.3 Start the service and check it
mkdir -p /data/logs/kubernetes/kube-scheduler
supervisorctl update
supervisorctl status
4.6.4 Deploy and start on the rest of the cluster
Identical on the other machines, so omitted.
4.7 Check the master node deployment
[root@server02 bin]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
5 Deploy a layer-4 reverse proxy for the apiserver
With the three services deployed on the master nodes, a reverse proxy is needed to unify the external port of the two apiservers.
Here an nginx+keepalived HA pair is deployed on the two machines 0.206 and 0.207.
5.1 Deploy the nginx layer-4 proxy
Port 7443 proxies the apiservers' port 6443; keepalived manages the VIP 10.11.0.218.
5.1.1 Install via yum
yum install nginx keepalived -y
5.1.2 Configure nginx
The layer-4 proxy cannot go in the default conf.d directory, because that directory is included from inside the http block by default.
So either append the stream config to the bottom of the main config file, or create a dedicated directory for layer-4 configs, mirroring the layer-7 layout.
# 1. Add a layer-4 config directory to the nginx configuration
mkdir /etc/nginx/tcp.d/
echo 'include /etc/nginx/tcp.d/*.conf;' >>/etc/nginx/nginx.conf

# 2. Write the proxy config
# vim /etc/nginx/tcp.d/apiserver.conf
stream {
    upstream kube-apiserver {
        server 10.11.0.206:6443     max_fails=3 fail_timeout=30s;
        server 10.11.0.207:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}
5.1.3 Start nginx
nginx -t
systemctl start nginx
systemctl enable nginx
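Optionally verify the proxy end to end; any response from an apiserver through port 7443 (even an authorization error) proves the layer-4 path works:

ss -lnt | grep 7443
# -k skips verification of the apiserver's certificate
curl -k https://127.0.0.1:7443/version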
5.2 Configure keepalived
5.2.1 Create the port-check script
Create the script
# vim /etc/keepalived/check_port.sh
#!/bin/bash
# keepalived port-monitoring script
# Usage: keepalived passes in a port number; check whether that port is listening and return the result
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
    PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
    if [ $PORT_PROCESS -eq 0 ];then
        echo "Port $CHK_PORT Is Not Used,End."
        exit 1
    fi
else
    echo "Check Port Cant Be Empty!"
fi
# Make the script executable
chmod +x /etc/keepalived/check_port.sh
5.2.2 Create the keepalived master config
The master is 10.11.0.206 and the backup is 10.11.0.207.
Note: the master config adds the nopreempt option (non-preemptive mode). After the VIP fails over, a restarted master will not take the VIP back; this is done for stability.
# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id 10.11.0.206
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.11.0.206
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.11.0.218
    }
}
5.2.3 Create the keepalived backup config
# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id 10.11.0.207
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 251
    mcast_src_ip 10.11.0.207
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.11.0.218
    }
}
5.2.4 Start keepalived and verify
systemctl start keepalived
systemctl enable keepalived
ip addr|grep '10.11.0.218'
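An optional failover drill, assuming nginx and keepalived are running on both machines; remember that with nopreempt the VIP will not move back automatically:

# On the master (10.11.0.206): stop nginx so the 7443 check fails
systemctl stop nginx
# On the backup (10.11.0.207): the VIP should appear within a few seconds
ip addr | grep '10.11.0.218'
# Restore nginx on the master afterwards
systemctl start nginx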
6 Deploy the worker nodes
6.1 Issue the kubelet certificate
Certificates are issued on 0.210, as before.
6.1.1 Create the JSON config file for the certificate CSR
cd /opt/certs/
# vim /opt/certs/kubelet-csr.json
{
    "CN": "k8s-kubelet",
    "hosts": [
        "127.0.0.1",
        "10.11.0.206",
        "10.11.0.208",
        "10.11.0.209",
        "10.11.0.210",
        "10.11.0.211",
        "10.11.0.212",
        "10.11.0.213",
        "10.11.0.214",
        "10.11.0.215",
        "10.11.0.216",
        "10.11.0.217",
        "10.11.0.218"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "zq",
            "OU": "ops"
        }
    ]
}
6.1.2 Generate the kubelet certificate files
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet

[root@server05 certs]# ll |grep kubelet
-rw-r--r-- 1 root root 1115 Apr 22 22:17 kubelet.csr
-rw-r--r-- 1 root root  452 Apr 22 22:17 kubelet-csr.json
-rw------- 1 root root 1679 Apr 22 22:17 kubelet-key.pem
-rw-r--r-- 1 root root 1460 Apr 22 22:17 kubelet.pem
6.2 Create the kubelet service
6.2.1 Copy the certificates to the node
cd /usr/local/kubernetes/server/bin/cert
scp server05:/opt/certs/kubelet.pem .
scp server05:/opt/certs/kubelet-key.pem .
6.2.2 Create the kubelet config
Creating the kubelet config file kubelet.kubeconfig is somewhat involved and takes four steps:
(1) set-cluster (set the cluster parameters)
Create cluster myk8s from the CA certificate; the apiserver address used is the VIP 10.11.0.218.
cd /usr/local/kubernetes/server/conf/
kubectl config set-cluster myk8s \
  --certificate-authority=/usr/local/kubernetes/server/bin/cert/ca.pem \
  --embed-certs=true \
  --server=https://10.11.0.218:7443 \
  --kubeconfig=kubelet.kubeconfig
(2) set-credentials (set the client credentials)
Create user k8s-node from the client certificate.
kubectl config set-credentials k8s-node \
  --client-certificate=/usr/local/kubernetes/server/bin/cert/client.pem \
  --client-key=/usr/local/kubernetes/server/bin/cert/client-key.pem \
  --embed-certs=true \
  --kubeconfig=kubelet.kubeconfig
(3) set-context (bind the cluster and user)
Create myk8s-context, associating cluster myk8s with user k8s-node.
kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=k8s-node \
  --kubeconfig=kubelet.kubeconfig
(4) use-context
Activate the context just created. This kubeconfig only needs to be generated once; other nodes can simply copy the resulting file (see 6.2.8).
kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
(5) Inspect the generated kubelet.kubeconfig
[root@server02 conf]# cat kubelet.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxxxxxxx
    server: https://10.11.0.218:7443
  name: myk8s
contexts:
- context:
    cluster: myk8s
    user: k8s-node
  name: myk8s-context
current-context: myk8s-context
kind: Config
preferences: {}
users:
- name: k8s-node
  user:
    client-certificate-data: xxxxxxxx
    client-key-data: xxxxxxxx
As you can see, this file contains the cluster name, the user name, the cluster's CA certificate, and the user's certificate and private key.
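Optionally, you can decode the embedded client certificate to confirm which user it authenticates as (assuming base64 and openssl are available); the subject CN should be k8s-node, matching the RBAC user created in the next step:

grep 'client-certificate-data' kubelet.kubeconfig | awk '{print $2}' | base64 -d | openssl x509 -noout -subject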
6.2.3 Create the k8s-node.yaml config file
# vim /usr/local/kubernetes/server/conf/k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
Using RBAC rules, this creates a ClusterRoleBinding resource.
The resource references a user named k8s-node
and binds that user to the ClusterRole named system:node,
giving the user the permissions of a cluster compute node.
Since this user name is also the user specified in the kubeconfig,
any kubelet started with that kubeconfig can become a node.
6.2.4 Apply the resource config
Apply the resource config and check the result.
# Apply the resource config
kubectl create -f /usr/local/kubernetes/server/conf/k8s-node.yaml
# Inspect the cluster role binding and its attributes
[root@server02 ~]# kubectl get clusterrolebinding k8s-node
NAME       AGE
k8s-node   86d

[root@server02 ~]# kubectl get clusterrolebinding k8s-node -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2020-08-24T12:37:52Z"
  name: k8s-node
  resourceVersion: "408496"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/k8s-node
  uid: 4115a257-dc28-40d3-92e9-61dd60ae9dc3
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
# At this point only the resource has been created; there are no actual nodes yet, as verified below
[root@server02 conf]# kubectl get nodes
No resources found.
6.2.5 Create the kubelet startup script
The --hostname-override value is the node's own hostname and differs on each node; remember to change it.
[root@server02 ~]# vim /usr/local/kubernetes/server/bin/kubelet.sh
#!/bin/sh
./kubelet \
  --hostname-override server02.host.com \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet.pem \
  --tls-private-key-file ./cert/kubelet-key.pem \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ../conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image harbor.chinasoft.com/public/pause:latest \
  --root-dir /data/kubelet
# Create directories & make it executable
chmod +x /usr/local/kubernetes/server/bin/kubelet.sh
mkdir -p /data/logs/kubernetes/kube-kubelet
mkdir -p /data/kubelet
6.2.6 Create the supervisor config
# vim /etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet]
command=sh /usr/local/kubernetes/server/bin/kubelet.sh
numprocs=1                      ; number of processes to start (def 1)
directory=/usr/local/kubernetes/server/bin
autostart=true                  ; start automatically (default: true)
autorestart=true                ; restart automatically (default: true)
startsecs=30                    ; seconds the process must stay up to count as started (def. 1)
startretries=3                  ; start retries (default 3)
exitcodes=0,2                   ; expected exit codes (default 0,2)
stopsignal=QUIT                 ; stop signal (default TERM)
stopwaitsecs=10                 ; seconds to wait after the stop signal (default 10)
user=root                       ; user to run as
redirect_stderr=true            ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
stdout_logfile_maxbytes=64MB    ; max log file size (default 50MB)
stdout_logfile_backups=4        ; number of rotated log files (default 10)
stdout_capture_maxbytes=1MB     ; capture pipe size (default 0)
; the process spawns children of its own; these options prevent orphan processes
killasgroup=true
stopasgroup=true
6.2.7 Start the service and check it
supervisorctl update
supervisorctl status
[root@server02 server]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
server02.host.com Ready <none> 65s v1.15.5
6.2.8 Deploy the remaining nodes
Once the first node is up, the others are much simpler: copy kubelet.kubeconfig to the local conf directory, create the startup script, and start it with supervisord.
Alternatively, skip copying the config file and run the four config-creation steps by hand again.
# Copy the certificates
cd /usr/local/kubernetes/server/bin/cert
scp server05:/opt/certs/kubelet.pem .
scp server05:/opt/certs/kubelet-key.pem .
# Copy the config file
cd /usr/local/kubernetes/server/conf/
scp server02:/usr/local/kubernetes/server/conf/kubelet.kubeconfig .
After copying the config, follow 6.2.5 (create the kubelet startup script); everything is identical except the --hostname-override value in the script (see the sketch below).
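A minimal sketch of that adjustment, assuming the script is copied from server02 and each node's hostname is exactly what --hostname-override should be:

# Copy the script from the first node, then swap in this node's hostname
scp server02:/usr/local/kubernetes/server/bin/kubelet.sh /usr/local/kubernetes/server/bin/
sed -i "s/server02.host.com/$(hostname)/" /usr/local/kubernetes/server/bin/kubelet.sh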
6.2.9 Check all nodes and label them
This step is optional; the labels are only for easier identification.
kubectl get nodes
kubectl label node server02.host.com node-role.kubernetes.io/master=
kubectl label node server02.host.com node-role.kubernetes.io/node=
[root@server03 cert]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
server02.host.com Ready master,node 9m v1.15.5
server03.host.com Ready <none> 64s v1.15.5
6.3 Create the kube-proxy service
Certificates are issued on 0.210.
6.3.1 Issue the kube-proxy certificate
(1) Create the JSON config file for the certificate CSR
cd /opt/certs/
[root@server05 ~]# cat /opt/certs/kube-proxy-csr.json
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "zq",
            "OU": "ops"
        }
    ]
}
(2) Generate the kube-proxy certificate files
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client
(3) Check the generated certificate files
[root@server05 certs]# ll |grep proxy
-rw-r--r-- 1 root root 1005 Apr 22 22:54 kube-proxy-client.csr
-rw------- 1 root root 1675 Apr 22 22:54 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1371 Apr 22 22:54 kube-proxy-client.pem
-rw-r--r-- 1 root root 267 Apr 22 22:54 kube-proxy-csr.json
6.3.2 Copy the certificate files to each node
cd /usr/local/kubernetes/server/bin/cert
scp server05:/opt/certs/kube-proxy-client.pem .
scp server05:/opt/certs/kube-proxy-client-key.pem .
6.3.3 Create the kube-proxy config
The same four steps as for the kubelet.
(1) set-cluster
cd /usr/local/kubernetes/server/conf/
kubectl config set-cluster myk8s \
  --certificate-authority=/usr/local/kubernetes/server/bin/cert/ca.pem \
  --embed-certs=true \
  --server=https://10.11.0.218:7443 \
  --kubeconfig=kube-proxy.kubeconfig

(2) set-credentials
kubectl config set-credentials kube-proxy \
  --client-certificate=/usr/local/kubernetes/server/bin/cert/kube-proxy-client.pem \
  --client-key=/usr/local/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

(3) set-context
kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

(4) use-context
kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
6.3.4 Load the ipvs modules for kube-proxy to use
# Create the ipvs boot script
# vim /etc/ipvs.sh
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
  /sbin/modinfo -F filename $i &>/dev/null
  if [ $? -eq 0 ];then
    /sbin/modprobe $i
  fi
done
# Run the script to enable ipvs
sh /etc/ipvs.sh
# Verify the result
[root@server02 conf]# lsmod |grep ip_vs
ip_vs_wrr 12697 0
ip_vs_wlc 12519 0
...(output truncated)
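The script above is meant to run at boot, but nothing wires it in yet; a minimal sketch using CentOS 7's rc.local (an assumption here; any init mechanism works):

# Run the ipvs loader on every boot
chmod +x /etc/rc.d/rc.local
echo 'sh /etc/ipvs.sh' >> /etc/rc.d/rc.local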
6.3.5 Create the kube-proxy startup script
As before, the --hostname-override value differs on each node; change it accordingly.
# vim /usr/local/kubernetes/server/bin/kube-proxy.sh
#!/bin/sh
./kube-proxy \
  --hostname-override server02.host.com \
  --cluster-cidr 172.7.0.0/16 \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig ../conf/kube-proxy.kubeconfig
# Make it executable
chmod +x /usr/local/kubernetes/server/bin/kube-proxy.sh
6.3.6 Create the supervisor config for kube-proxy
# vim /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy]
command=sh /usr/local/kubernetes/server/bin/kube-proxy.sh
numprocs=1                      ; number of processes to start (def 1)
directory=/usr/local/kubernetes/server/bin
autostart=true                  ; start automatically (default: true)
autorestart=true                ; restart automatically (default: true)
startsecs=30                    ; seconds the process must stay up to count as started (def. 1)
startretries=3                  ; start retries (default 3)
exitcodes=0,2                   ; expected exit codes (default 0,2)
stopsignal=QUIT                 ; stop signal (default TERM)
stopwaitsecs=10                 ; seconds to wait after the stop signal (default 10)
user=root                       ; user to run as
redirect_stderr=true            ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log
stdout_logfile_maxbytes=64MB    ; max log file size (default 50MB)
stdout_logfile_backups=4        ; number of rotated log files (default 10)
stdout_capture_maxbytes=1MB     ; capture pipe size (default 0)
; the process spawns children of its own; these options prevent orphan processes
killasgroup=true
stopasgroup=true
6.3.7 Start the service and check it
mkdir -p /data/logs/kubernetes/kube-proxy
supervisorctl update
supervisorctl status

[root@server02 conf]# kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   192.168.0.1   <none>        443/TCP   47h

# Check whether ipvs picked up the new configuration
yum install ipvsadm -y
[root@server02 conf]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 10.11.0.206:6443             Masq    1      0          0
  -> 10.11.0.207:6443             Masq    1      0          0
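As an optional end-to-end check, curl the kubernetes service's cluster IP from the node; any response (even an authorization error) shows ipvs is forwarding to the apiservers:

# -k skips certificate verification since the CA is self-signed
curl -k https://192.168.0.1:443/version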
6.3.8 Deploy the remaining nodes
First, copy kube-proxy.kubeconfig into the conf directory on server03.host.com.
# Copy the certificate files
cd /usr/local/kubernetes/server/bin/cert
scp server05:/opt/certs/kube-proxy-client.pem .
scp server05:/opt/certs/kube-proxy-client-key.pem .
# Copy the config file
cd /usr/local/kubernetes/server/conf/
scp server02:/usr/local/kubernetes/server/conf/kube-proxy.kubeconfig .
The only remaining difference is the hostname, which has already been covered, so the rest is omitted.
7 Verify the kubernetes cluster
7.1 Create a resource manifest on any node
[root@server02 ~]# vim /root/nginx-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.chinasoft.com/public/mynginx
        ports:
        - containerPort: 80
7.2 Apply the resource config and check
7.2.1 Apply the resource config
kubectl create -f /root/nginx-ds.yaml
[root@server03 conf]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ds-j777c 1/1 Running 0 8s
nginx-ds-nwsd6 1/1 Running 0 8s
[root@server02 ~]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
nginx-ds-c7jdr   1/1     Running   12         75d   172.7.208.3   server03.host.com   <none>           <none>
nginx-ds-g6xfv   1/1     Running   11         35d   172.7.207.4   server02.host.com   <none>           <none>
7.2.2 Check from the other node
kubectl get pods
kubectl get pods -o wide
curl 172.7.22.2

7.2.3 Confirm the kubernetes cluster is fully up
[root@server03 conf]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
controller-manager   Healthy   ok
scheduler            Healthy   ok

[root@server02 ~]# kubectl get nodes
NAME                STATUS   ROLES         AGE    VERSION
server02.host.com   Ready    master,node   6d1h   v1.15.5
server03.host.com   Ready    <none>        6d1h   v1.15.5

[root@server03 ~]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
nginx-ds-j777c   1/1     Running   0          6m45s
nginx-ds-nwsd6   1/1     Running   0          6m45s