1 Introduction to k8s components
1.2 K8s cluster API: kube-apiserver
Any interaction with your Kubernetes cluster goes through its API. The API server is the front end of the Kubernetes control plane and handles internal and external requests. It determines whether a request is valid and, if it is, processes it. You can access the API through REST calls, the kubectl command-line interface, or other dashboard tools.
The port defaults to 6443 and can be changed via the startup parameter "--secure-port".
The default listen address is a non-localhost network address, set via the startup parameter "--bind-address".
This port receives external HTTPS requests from clients, dashboards, and other callers.
It performs authentication based on token files, client certificates, or HTTP Basic auth.
It performs policy-based authorization.
Access authentication
authentication (token) --> permission check --> request validation --> execute the operation --> return the result.
(Authentication can use a certificate, a token, or a username and password.)
Kubernetes API tests
# kubectl get secrets -A | grep admin
# kubectl describe secrets admin-user-token-z487q
# curl --cacert /etc/kubernetes/ssl/ca.pem -H "Authorization: Bearer <token string>" https://172.0.0.1:6443
# curl <auth args as above> https://172.0.0.1:6443/        returns the full list of APIs
# curl <auth args as above> https://172.0.0.1:6443/apis        the API groups
# curl <auth args as above> https://172.0.0.1:6443/api/v1        the API at a specific version
# curl <auth args as above> https://172.0.0.1:6443/version        API version information
# curl <auth args as above> https://172.0.0.1:6443/healthz/etcd        health check of the etcd heartbeat
# curl <auth args as above> https://172.0.0.1:6443/apis/autoscaling/v1        details of one API group
# curl <auth args as above> https://172.0.0.1:6443/metrics        metrics data
1.3 K8s scheduler: kube-scheduler
Is the cluster healthy? If new containers are needed, where should they run? These are the questions the Kubernetes scheduler answers.
The scheduler considers a pod's resource requirements (such as CPU or memory) along with the health of the cluster, and then places the pod on a suitable compute node.
1.4 K8s controllers: kube-controller-manager
Controllers are what actually run the cluster, and the Kubernetes controller manager combines several controller functions into one process. Controllers consult the scheduler and make sure the correct number of pods is running. If a pod stops, another controller notices and responds. Controllers wire services to pods so that requests reach the right endpoints. There are also controllers for creating accounts and API access tokens.
Controllers include the replication controller, node controller, namespace controller, service-account controller, and others. As the management control center of the cluster, the controller manager is responsible for managing nodes, pod replicas, service endpoints (Endpoint), namespaces (Namespace), service accounts (ServiceAccount), and resource quotas (ResourceQuota). When a node goes down unexpectedly, the Controller Manager detects it promptly and runs an automated repair flow, keeping the cluster's pod replicas in their desired state at all times.
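The paragraph above describes a reconcile loop: each controller repeatedly compares the desired state with the observed state and acts to close the gap. A toy bash sketch of the idea (the desired/actual counters are illustrative stand-ins for replica counts, not a real controller):

```shell
#!/usr/bin/env bash
# Toy reconcile loop: drive the observed replica count toward the desired count.
desired=3
actual=0
while [ "$actual" -ne "$desired" ]; do
  if [ "$actual" -lt "$desired" ]; then
    actual=$((actual + 1))    # the controller "creates" a missing pod
    echo "created pod, now ${actual}/${desired}"
  else
    actual=$((actual - 1))    # the controller "deletes" an excess pod
    echo "deleted pod, now ${actual}/${desired}"
  fi
done
echo "${actual}" > reconciled.txt    # record the converged state
echo "reconciled: ${actual}/${desired}"
```

Real controllers follow the same pattern, except the desired state comes from etcd via the apiserver and the corrective actions are API calls.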
1.5 Key-value store: etcd
Configuration data and information about the state of the cluster live in etcd, a key-value store. etcd is distributed and fault-tolerant by design and is regarded as the ultimate source of truth for the cluster.
1.6 K8s nodes
A Kubernetes cluster needs at least one compute node, but it will usually have several. Once scheduled and orchestrated, pods run on the nodes. To expand cluster capacity, add more nodes.
1.6.1 Pods
A pod is the smallest and simplest unit in the Kubernetes object model. It represents a single instance of an application. Each pod consists of a container (or a set of tightly coupled containers) plus options that govern how the containers run. Pods can attach to persistent storage in order to run stateful applications.
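As a concrete illustration, a minimal single-container Pod manifest (the names and image here are only examples); it could later be applied with kubectl apply -f pod-demo.yaml:

```shell
# Write out a minimal Pod manifest: one container, one exposed port.
cat > pod-demo.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
  - name: app
    image: nginx:1.21
    ports:
    - containerPort: 80
EOF
cat pod-demo.yaml
```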
1.6.2 Container runtime engine
To run containers, every compute node has a container runtime engine. Docker is one example, but Kubernetes also supports other runtimes that conform to the Open Container Initiative (OCI) standards, such as rkt and CRI-O.
1.6.3 kubelet
Each compute node runs a kubelet, a small application that communicates with the control plane. The kubelet makes sure containers run inside their pods. When the control plane needs something done on a node, the kubelet carries it out.
It is the agent component running on every worker node, watching the pods assigned to its node. Its specific duties:
- report the node's status to the master;
- accept instructions and create docker containers inside pods;
- prepare the data volumes a pod needs;
- return the pod's running status;
- run container health checks on the node;
(It owns the pod/container lifecycle: creating and deleting pods.)
1.6.3.1 Common commands
# kubectl get services --all-namespaces -o wide
# kubectl get pods --all-namespaces -o wide
# kubectl get nodes -o wide        nodes are cluster-scoped, not namespaced
# kubectl get deployment --all-namespaces
# kubectl get deployment -n devpos -o wide        change the output format
# kubectl describe pods devpos-tomcat-app1-deployment -n devpos        show details of one resource
# kubectl create -f tomcat-app1.yaml
# kubectl apply -f tomcat-app1.yaml
# kubectl delete -f tomcat-app1.yaml
# kubectl create -f tomcat-app1.yaml --save-config --record
# kubectl apply -f tomcat-app1.yaml --record        recommended
# kubectl exec -it devpos-tomcat-app1-deployment-aaabbb-ddd bash -n devpos
# kubectl logs devpos-tomcat-app1-deployment-aaabbb-ddd -n devpos
# kubectl delete pods devpos-tomcat-app1-deployment-aaabbb-ddd -n devpos
1.6.4 kube-proxy
kube-proxy is the Kubernetes network proxy that runs on each node. It reflects the services defined in the Kubernetes API on its node and can do simple TCP, UDP, and SCTP stream forwarding, or round-robin TCP, UDP, and SCTP forwarding, across a set of backends. Users must create a service via the apiserver API to configure the proxy; in essence, kube-proxy implements access to Kubernetes services by maintaining network rules on the host and performing connection forwarding.
kube-proxy runs on every node, watches the API Server for changes to service objects, and then implements forwarding by managing iptables or IPVS rules.
2 Scheduling flow for creating a pod in k8s
user -> kubectl issues the request -> authenticated via kubeconfig -> apiserver authentication -> apiserver stores the information from the YAML in etcd -> controller-manager determines whether this is a create or an update -> scheduler decides which worker node to place the pod on -> kubelet reports its own status and watches the apiserver for pod scheduling requests
3 Binary deployment of k8s cluster network components, coredns, and dashboard
Cluster plan
IP | Hostname |
---|---|
192.168.2.200 | kube-master |
192.168.2.201 | kube-node1 |
192.168.2.202 | kube-node2 |
192.168.2.203 | kube-node3 |
192.168.2.206 | kube-harbor01 |
3.1 Deploying harbor
(1) Install docker and docker-compose
root@harbor01:~# apt-get install apt-transport-https ca-certificates curl gnupg lsb-release
# configure apt to use https
root@harbor01:~# curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# add the docker GPG key
root@harbor01:~# add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# add the docker stable repository
root@harbor01:~# apt-get update
root@harbor01:~# apt install docker-ce=5:19.03.15~3-0~ubuntu-focal docker-ce-cli=5:19.03.15~3-0~ubuntu-focal
root@harbor01:~# apt install python3-pip
root@harbor01:~# pip3 install docker-compose
(2) Download and install harbor;
root@k8s-harbor1:~# mkdir -pv /etc/harbor/ssl/
Upload the domain certificate. Here we have our own public domain and use a free certificate requested from a cloud vendor (for the commands to issue a self-signed certificate, see the end of this article).
root@k8s-harbor1:/etc/harbor/ssl# ls
-rw-r--r-- 1 root root 1350 Sep 13 18:09 ca.pem
-rw-r--r-- 1 root root 1679 Sep 13 18:09 harbor-key.pem
-rw-r--r-- 1 root root 1444 Sep 13 18:09 harbor.pem
root@harbor01:~# cd /var/data
root@harbor01:/var/data# wget https://github.com/goharbor/harbor/releases/download/v2.3.2/harbor-offline-installer-v2.3.2.tgz
root@harbor01:/var/data# tar zxvf harbor-offline-installer-v2.3.2.tgz
root@harbor01:/var/data# ln -sv /var/data/harbor /usr/local/
'/usr/local/harbor' -> '/var/data/harbor'
root@harbor01:/var/data# cd /var/data/harbor
root@harbor01:/var/data/harbor# cp harbor.yml.tmpl harbor.yml
# set the hostname, the https certificate paths, and the web UI login password
root@k8s-harbor1:/usr/local/harbor# grep -v "#" /var/data/harbor/harbor.yml|grep -v "^$"
hostname: harbor.yourdomain.com
https:
  port: 443
  certificate: /etc/harbor/ssl/harbor.pem
  private_key: /etc/harbor/ssl/harbor-key.pem
harbor_admin_password: Harbor12345
database:
  password: dzLtHS6vr7kZpCy_
  max_idle_conns: 50
  max_open_conns: 1000
data_volume: /var/data
clair:
  updaters_interval: 12
trivy:
  ignore_unfixed: false
  skip_update: false
  insecure: false
jobservice:
  max_job_workers: 10
notification:
  webhook_job_max_retry: 10
chart:
  absolute_url: disabled
log:
  level: info
  local:
    rotate_count: 3
    rotate_size: 100M
    location: /var/log/harbor
_version: 2.0.0
proxy:
  http_proxy:
  https_proxy:
  no_proxy:
  components:
    - core
    - jobservice
    - clair
    - trivy
root@harbor01:/var/data/harbor# ./install.sh --with-trivy        # install harbor
root@k8s-harbor1:/var/data/harbor# cat /usr/lib/systemd/system/harbor.service        # set Harbor to start on boot
[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/vmware/harbor
[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/local/bin/docker-compose -f /var/data/harbor/docker-compose.yml up
ExecStop=/usr/local/bin/docker-compose -f /var/data/harbor/docker-compose.yml down
[Install]
WantedBy=multi-user.target
root@k8s-harbor1:/var/data/harbor# systemctl enable harbor.service
Client verification: log in to harbor in a browser and create a project;
echo "192.168.2.206 harbor.yourdomain.com" >> /etc/hosts
docker login https://harbor.yourdomain.com --username=admin --password=Harbor12345
Issuing a self-signed certificate valid for ~20 years (subsequent steps use the self-signed certificate; the cloud vendor's free certificate is only valid for one year)
(1) Issue a self-signed certificate
root@k8s-harbor1:/etc/harbor/ssl# openssl genrsa -out harbor-key.pem 2048
root@k8s-harbor1:/etc/harbor/ssl# openssl req -x509 -new -nodes -key harbor-key.pem -subj "/CN=harbor.yourdomain.com" -days 7120 -out harbor.pem
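Note: recent Docker and browser releases reject certificates that carry no subjectAltName, and the plain -subj form above only sets a CN. A variant that also adds a SAN (requires OpenSSL 1.1.1 or newer for -addext; the domain name is the same assumption as above):

```shell
# Self-signed certificate with a subjectAltName, valid ~20 years
openssl genrsa -out harbor-key.pem 2048
openssl req -x509 -new -nodes -key harbor-key.pem \
  -subj "/CN=harbor.yourdomain.com" \
  -addext "subjectAltName=DNS:harbor.yourdomain.com" \
  -days 7120 -out harbor.pem
openssl x509 -in harbor.pem -noout -subject    # quick sanity check
```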
(2) Use the self-signed certificate
root@k8s-harbor1:/usr/local/harbor# grep -v "#" /var/data/harbor/harbor.yml|grep -v "^$"
hostname: harbor.yourdomain.com
https:
port: 443
certificate: /etc/harbor/ssl/harbor.pem
private_key: /etc/harbor/ssl/harbor-key.pem
harbor_admin_password: Harbor12345
root@k8s-harbor1:/usr/local/harbor# docker-compose start
(3) When accessing harbor from a browser, the client can now see the self-signed certificate. For the Linux docker client to reach https://harbor.yourdomain.com over HTTPS, copy the self-signed certificate file to the client;
root@node1:~# mkdir /etc/docker/certs.d/harbor.yourdomain.com -p
root@k8s-harbor1:/etc/harbor/ssl# scp harbor.pem 192.168.2.200:/etc/docker/certs.d/harbor.yourdomain.com/harbor.crt
(docker only loads files ending in .crt from certs.d as CA certificates, hence the rename on copy)
Add the registry harbor.yourdomain.com in /etc/docker/daemon.json, then restart docker;
root@kube-master:~# cat /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": [
"https://docker.mirrors.ustc.edu.cn",
"http://hub-mirror.c.163.com"
],
"insecure-registries": ["192.168.2.0/24"],
"max-concurrent-downloads": 10,
"log-driver": "json-file",
"log-level": "warn",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"data-root": "/var/lib/docker"
}
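A syntax error in daemon.json prevents dockerd from starting at all, so it is worth validating the JSON before restarting. A quick check on a candidate file (assumes python3 is available; in practice point it at /etc/docker/daemon.json):

```shell
# Validate a daemon.json candidate before installing it under /etc/docker/
cat > daemon.json <<'EOF'
{
  "insecure-registries": ["192.168.2.0/24"],
  "log-driver": "json-file"
}
EOF
python3 -m json.tool daemon.json > /dev/null && echo "daemon.json OK"
```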
root@kube-master:~# systemctl restart docker
root@kube-master:~# docker login https://harbor.yourdomain.com --username=admin --password=Harbor12345
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
3.2 Install ansible on the deployment node and prepare passwordless SSH login
(1) Install ansible;
root@kube-master:~# apt install python3-pip -y
root@kube-master:~# pip3 install ansible
(2) Configure passwordless login to the other nodes for ansible, including this host itself;
root@kube-master:~# ssh-keygen
root@kube-master:~# apt install sshpass
root@kube-master:~# cat scp-key.sh
#!/bin/bash
IP="
192.168.2.200
192.168.2.201
192.168.2.202
192.168.2.203
192.168.2.206
"
for node in ${IP};do
sshpass -p 123123 ssh-copy-id ${node} -o StrictHostKeyChecking=no
if [ $? -eq 0 ];then
echo "${node} key copy complete"
else
echo "${node} key copy failed"
fi
done
root@kube-master:~# bash scp-key.sh
3.3 Orchestrating the k8s installation from the deployment node
Network components, coredns, dashboard
3.3.1 Download and configure the kubeasz script
(1) Download the easzlab installation script and the installation files;
root@kube-master:~# export release=3.1.0
root@kube-master:~# curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
root@kube-master:~# cp ezdown bak.ezdown
root@kube-master:~# vi ezdown
DOCKER_VER=19.03.15        # the docker version to use
K8S_BIN_VER=v1.21.0        # the k8s version; the script pulls the matching easzlab/kubeasz-k8s-bin:v1.21.0 image from hub.docker.com, where you can look up available versions
BASE="/etc/kubeasz"        # where the related config files and images are downloaded
option: -{DdekSz}
-C stop&clean all local containers
-D download all into "$BASE"
-P download system packages for offline installing
-R download Registry(harbor) offline installer
-S start kubeasz in a container
-d <ver> set docker-ce version, default "$DOCKER_VER"
-e <ver> set kubeasz-ext-bin version, default "$EXT_BIN_VER"
-k <ver> set kubeasz-k8s-bin version, default "$K8S_BIN_VER"
-m <str> set docker registry mirrors, default "CN"(used in Mainland,China)
-p <ver> set kubeasz-sys-pkg version, default "$SYS_PKG_VER"
-z <ver> set kubeasz version, default "$KUBEASZ_VER"
root@kube-master:~# chmod +x ezdown
root@kube-master:~# bash ./ezdown -D        # downloads everything, by default into /etc/kubeasz
root@kube-master:~# cd /etc/kubeasz/
root@kube-master:/etc/kubeasz# ls
README.md ansible.cfg bin docs down example ezctl ezdown manifests pics playbooks roles tools
root@kube-master:/etc/kubeasz# ./ezctl new k8s-01
2021-09-12 16:36:36 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-01
2021-09-12 16:36:36 DEBUG set version of common plugins
2021-09-12 16:36:36 DEBUG cluster k8s-01: files successfully created.
2021-09-12 16:36:36 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-01/hosts'
2021-09-12 16:36:36 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-01/config.yml'
root@kube-master:/etc/kubeasz#
root@kube-master:/etc/kubeasz# tree clusters/
clusters/
└── k8s-01
├── config.yml
└── hosts
(2) Edit the hosts configuration file
root@kube-master:~# cat /etc/kubeasz/clusters/k8s-01/hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
192.168.2.200
# master node(s)
[kube_master]
192.168.2.200
# work node(s)
[kube_node]
192.168.2.201
192.168.2.202
192.168.2.203
# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#192.168.2.206 NEW_INSTALL=true
# [optional] loadbalance for accessing k8s from outside
[ex_lb]
#192.168.1.6 LB_ROLE=backup EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
#192.168.1.7 LB_ROLE=master EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
# [optional] ntp server for the cluster
[chrony]
#192.168.1.1
[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"
# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"
# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.68.0.0/16"
# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="172.20.0.0/16"
# NodePort Range
NODE_PORT_RANGE="30000-40000"
# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local"
# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/opt/kube/bin"
# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"
# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s-01"
# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"
(3) Pull the image locally and push it to the self-hosted harbor registry; this is used by the config file changes in step (4);
root@kube-master:~# docker pull easzlab/pause-amd64:3.4.1
root@kube-master:~# docker tag easzlab/pause-amd64:3.4.1 harbor.yourdomain.com/baseimages/pause-amd64:3.4.1
root@kube-master:~# docker push harbor.yourdomain.com/baseimages/pause-amd64:3.4.1
The push refers to repository [harbor.yourdomain.com/baseimages/pause-amd64]
915e8870f7d1: Pushed
3.4.1: digest: sha256:9ec1e780f5c0196af7b28f135ffc0533eddcb0a54a0ba8b32943303ce76fe70d size: 526
(4) Edit the cluster directory's config.yml
root@kube-master:/etc/kubeasz# vi clusters/k8s-01/config.yml
# [containerd] base container (pause) image
SANDBOX_IMAGE: "harbor.yourdomain.com/baseimages/pause-amd64:3.4.1"
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"
# maximum number of pods per node
MAX_PODS: 300
# [docker] trusted HTTP registries
INSECURE_REG: '["192.168.2.0/24"]'
# ------------------------------------------- calico
# [calico] setting CALICO_IPV4POOL_IPIP: "off" can improve network performance; see docs/setup/calico.md for the constraints
CALICO_IPV4POOL_IPIP: "Always"
# install coredns automatically
dns_install: "yes"
corednsVer: "1.8.0"
ENABLE_LOCAL_DNS_CACHE: false
# install metrics server automatically
metricsserver_install: "yes"
# install dashboard automatically
dashboard_install: "yes"
# install ingress automatically
ingress_install: "no"
# install prometheus automatically
prom_install: "no"
3.3.3 Adjust templates and selected configuration
# The ansible roles invoked are laid out as follows
root@kube-master:~# tree /etc/kubeasz/roles/prepare/
/etc/kubeasz/roles/prepare/
├── files
│ └── sctp.conf
├── tasks
│ ├── centos.yml
│ ├── common.yml
│ ├── main.yml
│ ├── offline.yml
│ └── ubuntu.yml
└── templates
├── 10-k8s-modules.conf.j2
├── 30-k8s-ulimits.conf.j2
├── 95-k8s-journald.conf.j2
└── 95-k8s-sysctl.conf.j2
3 directories, 10 files
root@kube-master:~# ls /etc/kubeasz/roles/
calico chrony cilium clean cluster-addon cluster-restore containerd deploy docker etcd ex-lb flannel harbor kube-lb kube-master kube-node kube-ovn kube-router os-harden prepare
root@kube-master:~# ls /etc/kubeasz/roles/deploy/tasks/
add-custom-kubectl-kubeconfig.yml create-kube-controller-manager-kubeconfig.yml create-kube-proxy-kubeconfig.yml create-kube-scheduler-kubeconfig.yml create-kubectl-kubeconfig.yml main.yml
# Verify the current health of etcd
root@kube-master:~# ETCDCTL_API=3 etcdctl --endpoints=https://192.168.2.200:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health
https://192.168.2.200:2379 is healthy: successfully committed proposal: took = 11.923478ms
# Check the supported container runtimes
root@kube-master:~# cat /etc/kubeasz/playbooks/03.runtime.yml
# to install a container runtime
- hosts:
  - kube_master
  - kube_node
  roles:
  - { role: docker, when: "CONTAINER_RUNTIME == 'docker'" }
  - { role: containerd, when: "CONTAINER_RUNTIME == 'containerd'" }
# The docker daemon.json template
root@kube-master:~# cat /etc/kubeasz/roles/docker/templates/daemon.json.j2
{
  "data-root": "{{ DOCKER_STORAGE_DIR }}",
  "exec-opts": ["native.cgroupdriver={{ CGROUP_DRIVER }}"],
{% if ENABLE_MIRROR_REGISTRY %}
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn",
    "http://hub-mirror.c.163.com"
  ],
{% endif %}
{% if ENABLE_REMOTE_API %}
  "hosts": ["tcp://0.0.0.0:2376", "unix:///var/run/docker.sock"],
{% endif %}
  "insecure-registries": {{ INSECURE_REG }},
  "max-concurrent-downloads": 10,
  "live-restore": true,
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": {
    "max-size": "50m",
    "max-file": "1"
  },
  "storage-driver": "overlay2"
}
# Path of the docker service template file
root@kube-master:~# cat /etc/kubeasz/roles/docker/templates/docker.service.j2
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
[Service]
Environment="PATH={{ bin_dir }}:/bin:/sbin:/usr/bin:/usr/sbin"
ExecStart={{ bin_dir }}/dockerd # --iptables=false
ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
ExecReload=/bin/kill -s HUP $MAINPID
Restart=always
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
# By default, master nodes are marked SchedulingDisabled (cordoned)
root@kube-master:~# cat /etc/kubeasz/playbooks/04.kube-master.yml
# to set up 'kube_master' nodes
- hosts: kube_master
  roles:
  - kube-lb
  - kube-master
  - kube-node
  tasks:
  - name: Making master nodes SchedulingDisabled
    shell: "{{ bin_dir }}/kubectl cordon {{ inventory_hostname }} "
    when: "inventory_hostname not in groups['kube_node']"
    ignore_errors: true
  - name: Setting master role name
    shell: "{{ bin_dir }}/kubectl label node {{ inventory_hostname }} kubernetes.io/role=master --overwrite"
    ignore_errors: true
# Some base images can be replaced to speed up deployment
/etc/kubeasz/clusters/k8s-01/config.yml
# [containerd] base container image
SANDBOX_IMAGE: "easzlab/pause-amd64:3.4.1"
# kube-proxy: the scheduler algorithm can be chosen before the initial install as needed; leaving it unchanged is also fine
/etc/kubeasz/roles/kube-node/templates/kube-proxy-config.yaml.j2
For example:
mode: "{{ PROXY_MODE }}"
ipvs:
  scheduler: wrr        # default is rr
# Use network images from your own registry
cat /etc/kubeasz/playbooks/06.network.yml
# to install network plugin, only one can be chosen
- hosts:
  - kube_master
  - kube_node
  roles:
  - { role: calico, when: "CLUSTER_NETWORK == 'calico'" }
  - { role: cilium, when: "CLUSTER_NETWORK == 'cilium'" }
  - { role: flannel, when: "CLUSTER_NETWORK == 'flannel'" }
  - { role: kube-router, when: "CLUSTER_NETWORK == 'kube-router'" }
  - { role: kube-ovn, when: "CLUSTER_NETWORK == 'kube-ovn'" }
ls /etc/kubeasz/roles/calico/templates/
calico-csr.json.j2 calico-v3.15.yaml.j2 calico-v3.3.yaml.j2 calico-v3.4.yaml.j2 calico-v3.8.yaml.j2 calicoctl.cfg.j2
vim /etc/kubeasz/roles/calico/templates/calico-v3.15.yaml.j2
image: calico/kube-controllers:v3.15.3 ==>harbor.yourdomain.com/baseimages/calico-kube-controllers:v3.15.3
image: calico/cni:v3.15.3 ==> harbor.yourdomain.com/baseimages/calico-cni:v3.15.3
image: calico/pod2daemon-flexvol:v3.15.3 ==>harbor.yourdomain.com/baseimages/calico-pod2daemon-flexvol:v3.15.3
image: calico/node:v3.15.3 ==>harbor.yourdomain.com/baseimages/calico-node:v3.15.3
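The four image substitutions above can be scripted with sed rather than edited by hand. A sketch run against a sample snippet (in practice the target would be /etc/kubeasz/roles/calico/templates/calico-v3.15.yaml.j2, and the registry prefix is the same assumption as above):

```shell
# Demo input mimicking the image lines in the calico template
cat > calico-demo.yaml <<'EOF'
          image: calico/kube-controllers:v3.15.3
          image: calico/cni:v3.15.3
          image: calico/pod2daemon-flexvol:v3.15.3
          image: calico/node:v3.15.3
EOF
REG=harbor.yourdomain.com/baseimages
# Rewrite each calico image reference to point at the private registry
sed -i \
  -e "s#image: calico/kube-controllers:#image: ${REG}/calico-kube-controllers:#" \
  -e "s#image: calico/cni:#image: ${REG}/calico-cni:#" \
  -e "s#image: calico/pod2daemon-flexvol:#image: ${REG}/calico-pod2daemon-flexvol:#" \
  -e "s#image: calico/node:#image: ${REG}/calico-node:#" \
  calico-demo.yaml
grep "image:" calico-demo.yaml
```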
# coredns template location
root@kube-master:~# cat /etc/kubeasz/clusters/k8s-01/yml/coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.8.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.68.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
# dashboard template location
root@kube-master:~# ls /etc/kubeasz/roles/cluster-addon/templates/dashboard/
admin-user-sa-rbac.yaml kubernetes-dashboard.yaml read-user-sa-rbac.yaml
root@kube-master:~# cat /etc/kubeasz/roles/cluster-addon/templates/dashboard/kubernetes-dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kube-system
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kube-system
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.1.0
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kube-system
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kube-system
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}