• Kubernetes: deploying a highly available k8s-v1.18.20 + etcd-v3.4.3 + flannel-v0.10.0 cluster on CentOS 8.0 with kubeadm


    Deploying a highly available k8s-v1.18.20 + etcd-v3.4.3 + flannel-v0.10.0 cluster on CentOS 8.0 with kubeadm

    1. Resource planning:

    Hostname       IP address      Specs   Role                  OS version
    k8s-master01   10.100.12.168   2C2G    master/worker/etcd    centos8.0
    k8s-master02   10.100.12.200   2C2G    master/worker/etcd    centos8.0
    k8s-master-lb  10.100.12.103   -       k8s-master-lb         centos8.0
    k8s-node01     10.100.15.246   2C4G    worker/etcd           centos8.0
    k8s-node02     10.100.10.195   2C4G    worker                centos8.0

    2. Environment initialization:

    Perform all of the initialization steps below on every host.

    2.1 Disable the firewalld firewall and related services on all hosts:

    systemctl disable --now firewalld
    systemctl disable --now dnsmasq
    systemctl disable --now NetworkManager
    systemctl disable --now iptables

    2.2 Disable swap:

    swapoff -a 
    sed -i 's/.*swap.*/#&/' /etc/fstab
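      After disabling swap it is worth confirming that no swap device remains active and that the fstab entry really was commented out. A quick check, assuming a standard CentOS 8 userland:

```shell
# Should print nothing: no swap devices remain active
swapon --show

# The Swap line should read 0B across the board
free -h | grep -i swap

# The swap entry in fstab should now start with '#'
grep swap /etc/fstab
```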

    2.3 Disable SELinux:

    setenforce  0
    sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
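      To confirm both the runtime and the persistent state (note that /etc/sysconfig/selinux is a symlink to /etc/selinux/config, so either path works):

```shell
# Runtime mode should now be Permissive (and Disabled after a reboot)
getenforce

# The persistent configuration should show SELINUX=disabled
grep ^SELINUX= /etc/selinux/config
```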

    2.4 Set each hostname according to the plan:

    hostnamectl set-hostname <hostname>

    2.5 Add local hosts entries:

    cat >> /etc/hosts << EOF
    10.100.12.168 k8s-master01
    10.100.12.200 k8s-master02
    10.100.12.103 k8s-master-lb
    10.100.15.246 k8s-node01
    10.100.10.195 k8s-node02
    EOF
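      A small loop can verify that every planned hostname resolves to the expected address before moving on (a sketch using the standard getent tool):

```shell
# Each hostname should resolve to the IP address from the planning table
for h in k8s-master01 k8s-master02 k8s-master-lb k8s-node01 k8s-node02; do
    getent hosts "$h"
done
```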

    2.6 Pass bridged IPv4 traffic to the iptables chains:

    cat > /etc/sysctl.d/k8s.conf << EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    
    sysctl --system  # apply the settings
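      Note that the net.bridge.* sysctl keys only exist while the br_netfilter kernel module is loaded; on a freshly booted host, sysctl --system may report them as unknown keys. A sketch of loading the module and making it persistent:

```shell
# Load the bridge netfilter module so the net.bridge.* keys exist
modprobe br_netfilter

# Have systemd load it automatically at boot
cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF

# Re-apply and spot-check; both keys should print 1
sysctl --system
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```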

    2.7 Time synchronization:

    yum install chrony -y
    systemctl restart chronyd.service
    systemctl enable --now chronyd.service
    chronyc -a makestep
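      Before proceeding, confirm the clock is actually synchronized; kubeadm's TLS bootstrapping is sensitive to clock skew between nodes. Check chrony's view of its sources:

```shell
# A line starting with "^*" marks the NTP source currently synced against
chronyc sources -v

# "Leap status : Normal" indicates a healthy sync state
chronyc tracking
```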

    2.8 Check the current OS version:

    cat /etc/redhat-release 
    CentOS Linux release 8.0.1905 (Core) 

    2.9 Check the current kernel version:

    uname -r
    4.18.0-80.el8.x86_64

    2.10 Use the ELRepo repository:

      This guide uses the ELRepo repository, a community repository for enterprise Linux that supports Red Hat Enterprise Linux (RHEL) and RHEL-based distributions (CentOS, Scientific Linux, Fedora, and others). ELRepo focuses on hardware-related packages, including filesystem, graphics, network, sound, and webcam drivers. Website: http://elrepo.org/tiki/tiki-index.php

      2.10.1 Import the ELRepo repository public key:

    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

      2.10.2 Install the ELRepo yum repository:

    yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y

    2.11 List the kernel packages currently available:

    [root@k8s-master01 ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
    Last metadata expiration check: 0:14:52 ago on Fri 17 Dec 2021 05:23:37 PM CST.
    Available Packages
    bpftool.x86_64                                             5.15.8-1.el8.elrepo                           elrepo-kernel
    kernel-lt.x86_64                                           5.4.166-1.el8.elrepo                          elrepo-kernel
    kernel-lt-core.x86_64                                      5.4.166-1.el8.elrepo                          elrepo-kernel
    kernel-lt-devel.x86_64                                     5.4.166-1.el8.elrepo                          elrepo-kernel
    kernel-lt-doc.noarch                                       5.4.166-1.el8.elrepo                          elrepo-kernel
    kernel-lt-headers.x86_64                                   5.4.166-1.el8.elrepo                          elrepo-kernel
    kernel-lt-modules.x86_64                                   5.4.166-1.el8.elrepo                          elrepo-kernel
    kernel-lt-modules-extra.x86_64                             5.4.166-1.el8.elrepo                          elrepo-kernel
    kernel-lt-tools.x86_64                                     5.4.166-1.el8.elrepo                          elrepo-kernel
    kernel-lt-tools-libs.x86_64                                5.4.166-1.el8.elrepo                          elrepo-kernel
    kernel-lt-tools-libs-devel.x86_64                          5.4.166-1.el8.elrepo                          elrepo-kernel
    kernel-ml-devel.x86_64                                     5.15.8-1.el8.elrepo                           elrepo-kernel
    kernel-ml-doc.noarch                                       5.15.8-1.el8.elrepo                           elrepo-kernel
    kernel-ml-headers.x86_64                                   5.15.8-1.el8.elrepo                           elrepo-kernel
    kernel-ml-modules-extra.x86_64                             5.15.8-1.el8.elrepo                           elrepo-kernel
    kernel-ml-tools.x86_64                                     5.15.8-1.el8.elrepo                           elrepo-kernel
    kernel-ml-tools-libs.x86_64                                5.15.8-1.el8.elrepo                           elrepo-kernel
    kernel-ml-tools-libs-devel.x86_64                          5.15.8-1.el8.elrepo                           elrepo-kernel
    perf.x86_64                                                5.15.8-1.el8.elrepo                           elrepo-kernel
    python3-perf.x86_64                                        5.15.8-1.el8.elrepo                           elrepo-kernel
    [root@k8s-master01 ~]# 

    2.12 Install the latest mainline kernel:

    yum --enablerepo=elrepo-kernel install kernel-ml -y

    2.13 Boot from the new kernel:

      Index 0 refers to the most recently installed kernel; setting the default to 0 boots the new kernel:

    grub2-set-default 0
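      On CentOS 8, boot entries are managed through BLS, so before relying on index 0 you can use grubby (shipped with CentOS 8) to confirm which kernel that index maps to:

```shell
# Show which kernel will boot by default
grubby --default-kernel

# List every boot entry with its index and kernel path
grubby --info=ALL | grep -E '^(index|kernel)'
```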

    2.14 Regenerate the grub configuration and reboot:

    grub2-mkconfig -o /boot/grub2/grub.cfg
    reboot
    Error encountered:
    [root@k8s-master01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
    Generating grub configuration file ...
    /usr/bin/grub2-editenv: error: environment block too small.
    -------
    Fix:
    [root@k8s-master01 ~]# mv /boot/grub2/grubenv /home/bak
    [root@k8s-master01 ~]# grub2-editenv /boot/grub2/grubenv create
    [root@k8s-master01 ~]# yum --enablerepo=elrepo-kernel install kernel-ml
    Last metadata expiration check: 0:06:48 ago on Tue 16 Nov 2021 06:58:49 PM CST.
    Package kernel-ml-5.15.2-1.el8.elrepo.x86_64 is already installed.
    Dependencies resolved.
    Nothing to do.
    Complete!
    [root@k8s-master01 ~]# grub2-set-default 0
    [root@k8s-master01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
    Generating grub configuration file ...
    done
    [root@k8s-master01 ~]# 

    2.15 Verify the new kernel:

    Old kernel version: 4.18.0-80.el8.x86_64
    New kernel version: 5.15.8-1.el8.elrepo.x86_64

    2.16 List the kernels installed on the system:

    [root@k8s-master01 ~]# rpm -qa | grep kernel
    kernel-ml-modules-5.15.2-1.el8.elrepo.x86_64
    kernel-core-4.18.0-80.el8.x86_64
    kernel-modules-4.18.0-80.el8.x86_64
    kernel-tools-libs-4.18.0-80.el8.x86_64
    kernel-4.18.0-80.el8.x86_64
    kernel-ml-core-5.15.2-1.el8.elrepo.x86_64
    kernel-ml-5.15.2-1.el8.elrepo.x86_64
    kernel-tools-4.18.0-80.el8.x86_64
    [root@k8s-master01 ~]# 

    2.17 Remove the old kernel:

    yum remove -y kernel-core-4.18.0

    2.18 List the installed kernels again:

    [root@k8s-master01 ~]# rpm -qa | grep kernel
    kernel-ml-core-5.15.8-1.el8.elrepo.x86_64
    kernel-tools-libs-4.18.0-80.el8.x86_64
    kernel-ml-modules-5.15.8-1.el8.elrepo.x86_64
    kernel-ml-5.15.8-1.el8.elrepo.x86_64
    kernel-tools-4.18.0-80.el8.x86_64

      Alternatively, install the yum-utils package; its tools can be used to remove older kernel versions automatically once more than three kernels are installed on the system:

    yum install yum-utils -y

    2.19 Configure ulimit parameters:

    echo "* soft nofile 655360" >> /etc/security/limits.conf
    echo "* hard nofile 655360" >> /etc/security/limits.conf
    echo "* soft nproc 655360" >> /etc/security/limits.conf
    echo "* hard nproc 655360" >> /etc/security/limits.conf
    echo "* soft memlock unlimited" >> /etc/security/limits.conf
    echo "* hard memlock unlimited" >> /etc/security/limits.conf
    echo "DefaultLimitNOFILE=1024000" >> /etc/systemd/system.conf
    echo "DefaultLimitNPROC=1024000" >> /etc/systemd/system.conf
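      The limits.conf entries take effect for new login sessions, while the systemd defaults in system.conf require re-executing systemd (or a reboot). To verify, assuming a fresh login shell:

```shell
# Re-read systemd's default limits without a full reboot
systemctl daemon-reexec

# From a NEW login session, confirm the per-process limits
ulimit -n   # expect 655360
ulimit -u   # expect 655360
ulimit -l   # expect unlimited
```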

    2.20 Install ipvsadm:

    yum install ipvsadm ipset sysstat conntrack libseccomp -y

      Configure the IPVS modules on all nodes. Since kernel 4.19, nf_conntrack_ipv4 has been renamed to nf_conntrack; the kernel installed here is newer than 4.19, so nf_conntrack is used:

    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack
    modprobe -- ip_tables
    modprobe -- ip_set
    modprobe -- xt_set
    modprobe -- ipt_set
    modprobe -- ipt_rpfilter
    modprobe -- ipt_REJECT
    modprobe -- ipip

      Check that the modules are loaded. To load them automatically at boot, write the modprobe commands above into the file /etc/sysconfig/modules/k8s.modules:

    more /etc/sysconfig/modules/k8s.modules
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack
    modprobe -- ip_tables
    modprobe -- ip_set
    modprobe -- xt_set
    modprobe -- ipt_set
    modprobe -- ipt_rpfilter
    modprobe -- ipt_REJECT
    modprobe -- ipip
    [root@k8s-master01 ~]# lsmod |grep -e ip_vs -e nf_conntrack
    ip_vs_sh               16384  0
    ip_vs_wrr              16384  0
    ip_vs_rr               16384  0
    ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
    nf_conntrack          176128  1 ip_vs
    nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
    nf_defrag_ipv4         16384  1 nf_conntrack
    libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
    [root@k8s-master01 ~]# 
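      One caveat: the /etc/sysconfig/modules mechanism dates from pre-systemd releases and is not executed automatically on CentOS 8; systemd loads modules listed under /etc/modules-load.d/ instead. A sketch that keeps the script usable by hand while also registering the modules with systemd-modules-load:

```shell
# The script form still works when run manually, but must be executable
chmod 755 /etc/sysconfig/modules/k8s.modules
bash /etc/sysconfig/modules/k8s.modules

# On systemd-based CentOS 8, this is what actually loads modules at boot
cat > /etc/modules-load.d/k8s.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

# Verify the IPVS and conntrack modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack
```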

    2.21 Enable kernel parameters required by a Kubernetes cluster; configure them on all nodes:

    cat <<EOF > /etc/sysctl.d/99-kubernetes.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    # The next three parameters raise the ARP cache GC thresholds above their
    # defaults. When the kernel's ARP table grows very large, tuning these avoids
    # ARP cache overflow causing network timeouts in some scenarios. Reference:
    # https://k8s.imroc.io/avoid/cases/arp-cache-overflow-causes-healthcheck-failed
    # Minimum number of entries kept in the ARP cache; below this the garbage
    # collector will not run. Default: 128
    net.ipv4.neigh.default.gc_thresh1 = 2048
    # Soft limit on ARP cache entries; the collector allows the count to exceed
    # this for 5 seconds before collecting. Default: 512
    net.ipv4.neigh.default.gc_thresh2 = 4096
    # Hard limit on ARP cache entries; once exceeded, the garbage collector runs
    # immediately. Default: 1024
    net.ipv4.neigh.default.gc_thresh3 = 8192
    # Maximum number of TCP sockets not attached to any user file handle
    net.ipv4.tcp_max_orphans = 32768
    # Once the number of TIME_WAIT sockets reaches tcp_max_tw_buckets, no new
    # TIME_WAIT sockets are created
    net.ipv4.tcp_max_tw_buckets = 32768
    net.ipv4.ip_forward = 1
    # net.ipv4.tcp_tw_recycle enables fast recycling of TIME_WAIT connections via
    # PAWS: the kernel keeps a per-host latest-received timestamp and compares it
    # against incoming packets, so stale packets from a previous connection can be
    # dropped without holding TIME_WAIT for 2*MSL; waiting roughly one RTO is
    # enough to cover retransmitted ACKs. This breaks, however, when multiple
    # clients sit behind NAT: their timestamps (derived from each host's uptime)
    # differ, so to the server they appear as one peer IP whose recorded timestamp
    # keeps jumping to the larger value, and packets from the client with the
    # smaller timestamp are treated as stale duplicates and silently dropped.
    # That is exactly the symptom of connections failing from inside a NAT
    # firewall while working outside it: a server with tcp_tw_recycle=1 shows the
    # problem, an untuned server does not. Note: this parameter was removed in
    # kernel 4.12 and does not exist on the 5.15 kernel installed above, so it is
    # left commented out here.
    # net.ipv4.tcp_tw_recycle = 1
    net.ipv4.tcp_timestamps = 0
    # Allow TIME_WAIT sockets to be reused for new TCP connections
    net.ipv4.tcp_tw_reuse = 1
    vm.swappiness = 0
    # vm.overcommit_memory selects the memory allocation policy:
    # 0: the kernel checks whether enough free memory is available; if so the
    #    allocation succeeds, otherwise it fails and an error is returned
    # 1: the kernel allows allocating all physical memory regardless of the
    #    current memory state
    # 2: the kernel allows allocations exceeding the sum of all physical memory
    #    and swap space
    vm.overcommit_memory = 1
    # 0: when memory is exhausted, the kernel triggers the OOM killer to kill the
    # most memory-hungry process instead of panicking
    vm.panic_on_oom = 0
    # Maximum number of memory map areas a process may have
    vm.max_map_count = 262144
    # Maximum number of inotify instances a single user may create concurrently
    # (each instance can hold many watches)
    fs.inotify.max_user_instances = 8192
    # Maximum number of inotify watches a single user may add (watches usually
    # target directories, so this bounds how many directories one user can
    # monitor). The default of 8192 is too small for container workloads and can
    # exhaust inotify watches, causing Pod creation or kubelet startup to fail,
    # so it is raised here.
    fs.inotify.max_user_watches = 1048576
    # System-wide file handle limits
    fs.file-max = 52706963
    fs.nr_open = 52706963
    net.ipv6.conf.all.disable_ipv6 = 1
    # How long established connections are kept in conntrack; the default is
    # 432000 seconds (5 days)
    net.netfilter.nf_conntrack_tcp_timeout_established = 7200
    # Allow services to bind to an IP address that does not exist on this host
    net.ipv4.ip_nonlocal_bind = 1
    # Maximum number of tracked connections; defaults to nf_conntrack_buckets * 4
    net.nf_conntrack_max = 1048576
    # Maximum number of conntrack entries netfilter can handle simultaneously in
    # kernel memory
    net.netfilter.nf_conntrack_max = 2310720
    # tcp_max_syn_backlog caps the number of half-open (SYN_RECV) connections,
    # i.e. the maximum queued clients that have sent a SYN; default 128
    net.ipv4.tcp_max_syn_backlog = 8096
    # Conntrack hash table size (read-only; on 64-bit systems the default is
    # 65536 with 8 GB RAM, doubling at 16 GB, and so on).
    # net.netfilter.nf_conntrack_buckets cannot be set directly (sysctl errors
    # out); change the module parameter instead:
    # echo 65536 > /sys/module/nf_conntrack/parameters/hashsize
    net.netfilter.nf_conntrack_buckets = 65536
    # Maximum number of packets queued when a network interface receives packets
    # faster than the kernel can process them
    net.core.netdev_max_backlog = 10000
    # Upper bound on the socket listen() backlog (the accept queue), where TCP
    # connections wait before being accepted; the default of 128 is too small
    # under high concurrency, so it is raised to 32768. Reference:
    # https://imroc.io/posts/kubernetes-overflow-and-drop/
    net.core.somaxconn = 32768
    # PID and thread limits
    kernel.pid_max = 65535
    kernel.threads-max = 65535
    EOF
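      A file dropped into /etc/sysctl.d/ does not take effect until it is loaded. Apply it the same way as in step 2.6 and spot-check a few keys:

```shell
# Load all sysctl configuration fragments, including 99-kubernetes.conf
sysctl --system

# Spot-check a few of the values just set
sysctl net.ipv4.ip_forward net.core.somaxconn fs.inotify.max_user_watches
```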

    3. Deploy the Docker Engine:

    Install the Docker Engine on all hosts. Official installation guide: https://docs.docker.com/engine/install/centos

    3.1 Operating system requirements

      To install Docker Engine, you need a maintained version of CentOS 7 or 8. Archived versions aren’t supported or tested.

    The centos-extras repository must be enabled. This repository is enabled by default, but if you have disabled it, you need to re-enable it.

    The overlay2 storage driver is recommended.

    3.2 Uninstall old Docker versions

    sudo yum remove docker \
                      docker-client \
                      docker-client-latest \
                      docker-common \
                      docker-latest \
                      docker-latest-logrotate \
                      docker-logrotate \
                      docker-engine

    3.3 Configure the yum repository

     sudo yum install -y yum-utils
     sudo yum-config-manager \
        --add-repo \
        https://download.docker.com/linux/centos/docker-ce.repo

    3.4 Install the Docker Engine with yum

    yum install -y docker-ce-3:19.03.15-3.el8.x86_64 docker-ce-cli-1:19.03.15-3.el8.x86_64 containerd.io-1.4.12-3.1.el8.x86_64 
    (Later in the process the Docker version turned out to be too new: the latest Docker supported by Kubernetes v1.18.20 is docker-ce 19.03, so the docker-ce 20 packages were removed and docker-ce 19.03 was installed instead.)
    (The first docker-ce installation hit the error below. Since it had already been resolved, reinstalling docker-ce did not reproduce it, but the fix is recorded here anyway. If your Kubernetes version matches this guide, use the version-pinned yum command above to install docker-ce.)
    Error encountered:
    [root@k8s-master01 ~]# sudo yum install docker-ce docker-ce-cli containerd.io
    Docker CE Stable - x86_64                                                                               14 kB/s | 3.5 kB     00:00    
    Error: 
     Problem 1: problem with installed package podman-1.0.0-2.git921f98f.module_el8.0.0+58+91b614e7.x86_64
      - package podman-1.0.0-2.git921f98f.module_el8.0.0+58+91b614e7.x86_64 requires runc, but none of the providers can be installed
      - package podman-3.2.3-0.11.module_el8.4.0+942+d25aada8.x86_64 requires runc >= 1.0.0-57, but none of the providers can be installed
      - package podman-3.2.3-0.10.module_el8.4.0+886+c9a8d9ad.x86_64 requires runc >= 1.0.0-57, but none of the providers can be installed
      - package podman-3.0.1-7.module_el8.4.0+830+8027e1c4.x86_64 requires runc >= 1.0.0-57, but none of the providers can be installed
      - package podman-3.0.1-6.module_el8.4.0+781+acf4c33b.x86_64 requires runc >= 1.0.0-57, but none of the providers can be installed
      - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - cannot install the best candidate for the job
      - package runc-1.0.0-56.rc5.dev.git2abd837.module_el8.3.0+569+1bada2e4.x86_64 is filtered out by modular filtering
      - package runc-1.0.0-64.rc10.module_el8.4.0+522+66908d0c.x86_64 is filtered out by modular filtering
      - package runc-1.0.0-65.rc10.module_el8.4.0+819+4afbd1d6.x86_64 is filtered out by modular filtering
      - package runc-1.0.0-70.rc92.module_el8.4.0+786+4668b267.x86_64 is filtered out by modular filtering
      - package runc-1.0.0-71.rc92.module_el8.4.0+833+9763146c.x86_64 is filtered out by modular filtering
      - package runc-1.0.0-72.rc92.module_el8.4.0+964+56b6762f.x86_64 is filtered out by modular filtering
     Problem 2: package buildah-1.19.7-1.module_el8.4.0+781+acf4c33b.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
      - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - package docker-ce-3:20.10.10-3.el8.x86_64 requires containerd.io >= 1.4.1, but none of the providers can be installed
      - package containerd.io-1.4.3-3.2.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.3-3.2.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.3-3.2.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.3-3.2.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.3-3.2.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - package containerd.io-1.4.3-3.2.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - problem with installed package buildah-1.5-3.gite94b4f9.module_el8.0.0+58+91b614e7.x86_64
      - package buildah-1.19.7-2.module_el8.4.0+830+8027e1c4.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
      - package containerd.io-1.4.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.3-3.2.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.3-3.2.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.8-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.8-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
      - cannot install the best candidate for the job
      - package runc-1.0.0-56.rc5.dev.git2abd837.module_el8.3.0+569+1bada2e4.x86_64 is filtered out by modular filtering
      - package runc-1.0.0-64.rc10.module_el8.4.0+522+66908d0c.x86_64 is filtered out by modular filtering
      - package runc-1.0.0-65.rc10.module_el8.4.0+819+4afbd1d6.x86_64 is filtered out by modular filtering
      - package runc-1.0.0-70.rc92.module_el8.4.0+786+4668b267.x86_64 is filtered out by modular filtering
      - package runc-1.0.0-71.rc92.module_el8.4.0+833+9763146c.x86_64 is filtered out by modular filtering
      - package runc-1.0.0-72.rc92.module_el8.4.0+964+56b6762f.x86_64 is filtered out by modular filtering
      - package containerd.io-1.4.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - package containerd.io-1.4.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - package containerd.io-1.4.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - package containerd.io-1.4.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - package containerd.io-1.4.8-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.8-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.8-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.8-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.8-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - package containerd.io-1.4.8-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - package containerd.io-1.4.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - package containerd.io-1.4.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
      - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
      - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
      - package buildah-1.21.4-1.module_el8.4.0+886+c9a8d9ad.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
      - package buildah-1.21.4-2.module_el8.4.0+942+d25aada8.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
      - package buildah-1.5-3.gite94b4f9.module_el8.0.0+58+91b614e7.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
    (try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
    Fix:
    [root@k8s-master01 ~]# sudo yum install -y --allowerasing \
        docker-ce-3:19.03.15-3.el8.x86_64 docker-ce-cli-1:19.03.15-3.el8.x86_64 containerd.io-1.4.12-3.1.el8.x86_64

    3.5 List the available Docker Engine versions

    [root@k8s-master01 ~]# yum list docker-ce --showduplicates | sort -r
    Last metadata expiration check: 0:07:10 ago on Wed 17 Nov 2021 11:46:17 AM CST.
    Installed Packages
    docker-ce.x86_64               3:20.10.9-3.el8                 docker-ce-stable 
    docker-ce.x86_64               3:20.10.8-3.el8                 docker-ce-stable 
    docker-ce.x86_64               3:20.10.7-3.el8                 docker-ce-stable 
    docker-ce.x86_64               3:20.10.6-3.el8                 docker-ce-stable 
    docker-ce.x86_64               3:20.10.5-3.el8                 docker-ce-stable 
    docker-ce.x86_64               3:20.10.4-3.el8                 docker-ce-stable 
    docker-ce.x86_64               3:20.10.3-3.el8                 docker-ce-stable 
    docker-ce.x86_64               3:20.10.2-3.el8                 docker-ce-stable 
    docker-ce.x86_64               3:20.10.1-3.el8                 docker-ce-stable 
    docker-ce.x86_64               3:20.10.10-3.el8                docker-ce-stable 
    docker-ce.x86_64               3:20.10.10-3.el8                @docker-ce-stable
    docker-ce.x86_64               3:20.10.0-3.el8                 docker-ce-stable 
    docker-ce.x86_64               3:19.03.15-3.el8                docker-ce-stable 
    docker-ce.x86_64               3:19.03.14-3.el8                docker-ce-stable 
    docker-ce.x86_64               3:19.03.13-3.el8                docker-ce-stable 
    Available Packages
    [root@k8s-master01 ~]# 

    3.6 Configure an Alibaba Cloud registry mirror

    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "registry-mirrors": ["https://zp4fac78.mirror.aliyuncs.com"]
    }
    EOF
    sudo systemctl daemon-reload

    3.7 Start the Docker Engine

    systemctl start docker 
    systemctl enable docker 
    systemctl status docker
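      Since kubelet will rely on the systemd cgroup driver configured in daemon.json, it is worth verifying that Docker picked up both the driver and the registry mirror:

```shell
# Confirm the running server version (should be 19.03.x)
docker version --format '{{.Server.Version}}'

# Confirm the cgroup driver from daemon.json took effect
docker info --format '{{.CgroupDriver}}'   # expect: systemd

# Confirm the registry mirror is active
docker info | grep -A1 'Registry Mirrors'
```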

    4. Deploy kubeadm, kubectl, and kubelet

      4.1 Configure the Kubernetes yum repository

    cat <<EOF >/etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
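      Before querying package versions, refresh the metadata and confirm the new repository is actually enabled:

```shell
# Rebuild the yum metadata cache, pulling in the new repo
yum makecache

# The kubernetes repo should appear in the enabled list
yum repolist enabled | grep -i kubernetes
```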

      4.2 List the available kubeadm versions

    [root@k8s-master01 yum.repos.d]# yum list  kubeadm.x86_64 --showduplicates | sort -r
    Last metadata expiration check: 0:00:01 ago on Fri 17 Dec 2021 06:23:15 PM CST.
    Kubernetes                                      957 kB/s | 136 kB     00:00    
    kubeadm.x86_64                       1.9.9-0                          kubernetes
    kubeadm.x86_64                       1.9.8-0                          kubernetes
    kubeadm.x86_64                       1.9.7-0                          kubernetes
    kubeadm.x86_64                       1.9.6-0                          kubernetes
    kubeadm.x86_64                       1.9.5-0                          kubernetes
    kubeadm.x86_64                       1.9.4-0                          kubernetes
    kubeadm.x86_64                       1.9.3-0                          kubernetes
    kubeadm.x86_64                       1.9.2-0                          kubernetes
    kubeadm.x86_64                       1.9.11-0                         kubernetes
    kubeadm.x86_64                       1.9.1-0                          kubernetes
    kubeadm.x86_64                       1.9.10-0                         kubernetes
    kubeadm.x86_64                       1.9.0-0                          kubernetes
    kubeadm.x86_64                       1.8.9-0                          kubernetes
    kubeadm.x86_64                       1.8.8-0                          kubernetes
    kubeadm.x86_64                       1.8.7-0                          kubernetes
    kubeadm.x86_64                       1.8.6-0                          kubernetes
    kubeadm.x86_64                       1.8.5-0                          kubernetes
    kubeadm.x86_64                       1.8.4-0                          kubernetes
    kubeadm.x86_64                       1.8.3-0                          kubernetes
    kubeadm.x86_64                       1.8.2-0                          kubernetes
    kubeadm.x86_64                       1.8.15-0                         kubernetes
    kubeadm.x86_64                       1.8.14-0                         kubernetes
    kubeadm.x86_64                       1.8.13-0                         kubernetes
    kubeadm.x86_64                       1.8.12-0                         kubernetes
    kubeadm.x86_64                       1.8.11-0                         kubernetes
    kubeadm.x86_64                       1.8.1-0                          kubernetes
    kubeadm.x86_64                       1.8.10-0                         kubernetes
    kubeadm.x86_64                       1.8.0-1                          kubernetes
    kubeadm.x86_64                       1.8.0-0                          kubernetes
    kubeadm.x86_64                       1.7.9-0                          kubernetes
    kubeadm.x86_64                       1.7.8-1                          kubernetes
    kubeadm.x86_64                       1.7.7-1                          kubernetes
    kubeadm.x86_64                       1.7.6-1                          kubernetes
    kubeadm.x86_64                       1.7.5-0                          kubernetes
    kubeadm.x86_64                       1.7.4-0                          kubernetes
    kubeadm.x86_64                       1.7.3-1                          kubernetes
    kubeadm.x86_64                       1.7.2-0                          kubernetes
    kubeadm.x86_64                       1.7.16-0                         kubernetes
    kubeadm.x86_64                       1.7.15-0                         kubernetes
    kubeadm.x86_64                       1.7.14-0                         kubernetes
    kubeadm.x86_64                       1.7.11-0                         kubernetes
    kubeadm.x86_64                       1.7.1-0                          kubernetes
    kubeadm.x86_64                       1.7.10-0                         kubernetes
    kubeadm.x86_64                       1.7.0-0                          kubernetes
    kubeadm.x86_64                       1.6.9-0                          kubernetes
    kubeadm.x86_64                       1.6.8-0                          kubernetes
    kubeadm.x86_64                       1.6.7-0                          kubernetes
    kubeadm.x86_64                       1.6.6-0                          kubernetes
    kubeadm.x86_64                       1.6.5-0                          kubernetes
    kubeadm.x86_64                       1.6.4-0                          kubernetes
    kubeadm.x86_64                       1.6.3-0                          kubernetes
    kubeadm.x86_64                       1.6.2-0                          kubernetes
    kubeadm.x86_64                       1.6.13-0                         kubernetes
    kubeadm.x86_64                       1.6.12-0                         kubernetes
    kubeadm.x86_64                       1.6.11-0                         kubernetes
    kubeadm.x86_64                       1.6.1-0                          kubernetes
    kubeadm.x86_64                       1.6.10-0                         kubernetes
    kubeadm.x86_64                       1.6.0-0                          kubernetes
    kubeadm.x86_64                       1.23.1-0                         kubernetes
    kubeadm.x86_64                       1.23.0-0                         kubernetes
    kubeadm.x86_64                       1.22.5-0                         kubernetes
    kubeadm.x86_64                       1.22.4-0                         kubernetes
    kubeadm.x86_64                       1.22.3-0                         kubernetes
    kubeadm.x86_64                       1.22.2-0                         kubernetes
    kubeadm.x86_64                       1.22.1-0                         kubernetes
    kubeadm.x86_64                       1.22.0-0                         kubernetes
    kubeadm.x86_64                       1.21.8-0                         kubernetes
    kubeadm.x86_64                       1.21.7-0                         kubernetes
    kubeadm.x86_64                       1.21.6-0                         kubernetes
    kubeadm.x86_64                       1.21.5-0                         kubernetes
    kubeadm.x86_64                       1.21.4-0                         kubernetes
    kubeadm.x86_64                       1.21.3-0                         kubernetes
    kubeadm.x86_64                       1.21.2-0                         kubernetes
    kubeadm.x86_64                       1.21.1-0                         kubernetes
    kubeadm.x86_64                       1.21.0-0                         kubernetes
    kubeadm.x86_64                       1.20.9-0                         kubernetes
    kubeadm.x86_64                       1.20.8-0                         kubernetes
    kubeadm.x86_64                       1.20.7-0                         kubernetes
    kubeadm.x86_64                       1.20.6-0                         kubernetes
    kubeadm.x86_64                       1.20.5-0                         kubernetes
    kubeadm.x86_64                       1.20.4-0                         kubernetes
    kubeadm.x86_64                       1.20.2-0                         kubernetes
    kubeadm.x86_64                       1.20.14-0                        kubernetes
    kubeadm.x86_64                       1.20.13-0                        kubernetes
    kubeadm.x86_64                       1.20.12-0                        kubernetes
    kubeadm.x86_64                       1.20.11-0                        kubernetes
    kubeadm.x86_64                       1.20.1-0                         kubernetes
    kubeadm.x86_64                       1.20.10-0                        kubernetes
    kubeadm.x86_64                       1.20.0-0                         kubernetes
    kubeadm.x86_64                       1.19.9-0                         kubernetes
    kubeadm.x86_64                       1.19.8-0                         kubernetes
    kubeadm.x86_64                       1.19.7-0                         kubernetes
    kubeadm.x86_64                       1.19.6-0                         kubernetes
    kubeadm.x86_64                       1.19.5-0                         kubernetes
    kubeadm.x86_64                       1.19.4-0                         kubernetes
    kubeadm.x86_64                       1.19.3-0                         kubernetes
    kubeadm.x86_64                       1.19.2-0                         kubernetes
    kubeadm.x86_64                       1.19.16-0                        kubernetes
    kubeadm.x86_64                       1.19.15-0                        kubernetes
    kubeadm.x86_64                       1.19.14-0                        kubernetes
    kubeadm.x86_64                       1.19.13-0                        kubernetes
    kubeadm.x86_64                       1.19.12-0                        kubernetes
    kubeadm.x86_64                       1.19.11-0                        kubernetes
    kubeadm.x86_64                       1.19.1-0                         kubernetes
    kubeadm.x86_64                       1.19.10-0                        kubernetes
    kubeadm.x86_64                       1.19.0-0                         kubernetes
    kubeadm.x86_64                       1.18.9-0                         kubernetes
    kubeadm.x86_64                       1.18.8-0                         kubernetes
    kubeadm.x86_64                       1.18.6-0                         kubernetes
    kubeadm.x86_64                       1.18.5-0                         kubernetes
    kubeadm.x86_64                       1.18.4-1                         kubernetes
    kubeadm.x86_64                       1.18.4-0                         kubernetes
    kubeadm.x86_64                       1.18.3-0                         kubernetes
    kubeadm.x86_64                       1.18.2-0                         kubernetes
    kubeadm.x86_64                       1.18.20-0                        kubernetes
    kubeadm.x86_64                       1.18.19-0                        kubernetes
    kubeadm.x86_64                       1.18.18-0                        kubernetes
    kubeadm.x86_64                       1.18.17-0                        kubernetes
    kubeadm.x86_64                       1.18.16-0                        kubernetes
    kubeadm.x86_64                       1.18.15-0                        kubernetes
    kubeadm.x86_64                       1.18.14-0                        kubernetes
    kubeadm.x86_64                       1.18.13-0                        kubernetes
    kubeadm.x86_64                       1.18.12-0                        kubernetes
    kubeadm.x86_64                       1.18.1-0                         kubernetes
    kubeadm.x86_64                       1.18.10-0                        kubernetes
    kubeadm.x86_64                       1.18.0-0                         kubernetes
    kubeadm.x86_64                       1.17.9-0                         kubernetes
    kubeadm.x86_64                       1.17.8-0                         kubernetes
    kubeadm.x86_64                       1.17.7-1                         kubernetes
    kubeadm.x86_64                       1.17.7-0                         kubernetes
    kubeadm.x86_64                       1.17.6-0                         kubernetes
    kubeadm.x86_64                       1.17.5-0                         kubernetes
    kubeadm.x86_64                       1.17.4-0                         kubernetes
    kubeadm.x86_64                       1.17.3-0                         kubernetes
    kubeadm.x86_64                       1.17.2-0                         kubernetes
    kubeadm.x86_64                       1.17.17-0                        kubernetes
    kubeadm.x86_64                       1.17.16-0                        kubernetes
    kubeadm.x86_64                       1.17.15-0                        kubernetes
    kubeadm.x86_64                       1.17.14-0                        kubernetes
    kubeadm.x86_64                       1.17.13-0                        kubernetes
    kubeadm.x86_64                       1.17.12-0                        kubernetes
    kubeadm.x86_64                       1.17.11-0                        kubernetes
    kubeadm.x86_64                       1.17.1-0                         kubernetes
    kubeadm.x86_64                       1.17.0-0                         kubernetes
    kubeadm.x86_64                       1.16.9-0                         kubernetes
    kubeadm.x86_64                       1.16.8-0                         kubernetes
    kubeadm.x86_64                       1.16.7-0                         kubernetes
    kubeadm.x86_64                       1.16.6-0                         kubernetes
    kubeadm.x86_64                       1.16.5-0                         kubernetes
    kubeadm.x86_64                       1.16.4-0                         kubernetes
    kubeadm.x86_64                       1.16.3-0                         kubernetes
    kubeadm.x86_64                       1.16.2-0                         kubernetes
    kubeadm.x86_64                       1.16.15-0                        kubernetes
    kubeadm.x86_64                       1.16.14-0                        kubernetes
    kubeadm.x86_64                       1.16.13-0                        kubernetes
    kubeadm.x86_64                       1.16.12-0                        kubernetes
    kubeadm.x86_64                       1.16.11-1                        kubernetes
    kubeadm.x86_64                       1.16.11-0                        kubernetes
    kubeadm.x86_64                       1.16.1-0                         kubernetes
    kubeadm.x86_64                       1.16.10-0                        kubernetes
    kubeadm.x86_64                       1.16.0-0                         kubernetes
    kubeadm.x86_64                       1.15.9-0                         kubernetes
    kubeadm.x86_64                       1.15.8-0                         kubernetes
    kubeadm.x86_64                       1.15.7-0                         kubernetes
    kubeadm.x86_64                       1.15.6-0                         kubernetes
    kubeadm.x86_64                       1.15.5-0                         kubernetes
    kubeadm.x86_64                       1.15.4-0                         kubernetes
    kubeadm.x86_64                       1.15.3-0                         kubernetes
    kubeadm.x86_64                       1.15.2-0                         kubernetes
    kubeadm.x86_64                       1.15.12-0                        kubernetes
    kubeadm.x86_64                       1.15.11-0                        kubernetes
    kubeadm.x86_64                       1.15.1-0                         kubernetes
    kubeadm.x86_64                       1.15.10-0                        kubernetes
    kubeadm.x86_64                       1.15.0-0                         kubernetes
    kubeadm.x86_64                       1.14.9-0                         kubernetes
    kubeadm.x86_64                       1.14.8-0                         kubernetes
    kubeadm.x86_64                       1.14.7-0                         kubernetes
    kubeadm.x86_64                       1.14.6-0                         kubernetes
    kubeadm.x86_64                       1.14.5-0                         kubernetes
    kubeadm.x86_64                       1.14.4-0                         kubernetes
    kubeadm.x86_64                       1.14.3-0                         kubernetes
    kubeadm.x86_64                       1.14.2-0                         kubernetes
    kubeadm.x86_64                       1.14.1-0                         kubernetes
    kubeadm.x86_64                       1.14.10-0                        kubernetes
    kubeadm.x86_64                       1.14.0-0                         kubernetes
    kubeadm.x86_64                       1.13.9-0                         kubernetes
    kubeadm.x86_64                       1.13.8-0                         kubernetes
    kubeadm.x86_64                       1.13.7-0                         kubernetes
    kubeadm.x86_64                       1.13.6-0                         kubernetes
    kubeadm.x86_64                       1.13.5-0                         kubernetes
    kubeadm.x86_64                       1.13.4-0                         kubernetes
    kubeadm.x86_64                       1.13.3-0                         kubernetes
    kubeadm.x86_64                       1.13.2-0                         kubernetes
    kubeadm.x86_64                       1.13.12-0                        kubernetes
    kubeadm.x86_64                       1.13.11-0                        kubernetes
    kubeadm.x86_64                       1.13.1-0                         kubernetes
    kubeadm.x86_64                       1.13.10-0                        kubernetes
    kubeadm.x86_64                       1.13.0-0                         kubernetes
    kubeadm.x86_64                       1.12.9-0                         kubernetes
    kubeadm.x86_64                       1.12.8-0                         kubernetes
    kubeadm.x86_64                       1.12.7-0                         kubernetes
    kubeadm.x86_64                       1.12.6-0                         kubernetes
    kubeadm.x86_64                       1.12.5-0                         kubernetes
    kubeadm.x86_64                       1.12.4-0                         kubernetes
    kubeadm.x86_64                       1.12.3-0                         kubernetes
    kubeadm.x86_64                       1.12.2-0                         kubernetes
    kubeadm.x86_64                       1.12.1-0                         kubernetes
    kubeadm.x86_64                       1.12.10-0                        kubernetes
    kubeadm.x86_64                       1.12.0-0                         kubernetes
    kubeadm.x86_64                       1.11.9-0                         kubernetes
    kubeadm.x86_64                       1.11.8-0                         kubernetes
    kubeadm.x86_64                       1.11.7-0                         kubernetes
    kubeadm.x86_64                       1.11.6-0                         kubernetes
    kubeadm.x86_64                       1.11.5-0                         kubernetes
    kubeadm.x86_64                       1.11.4-0                         kubernetes
    kubeadm.x86_64                       1.11.3-0                         kubernetes
    kubeadm.x86_64                       1.11.2-0                         kubernetes
    kubeadm.x86_64                       1.11.1-0                         kubernetes
    kubeadm.x86_64                       1.11.10-0                        kubernetes
    kubeadm.x86_64                       1.11.0-0                         kubernetes
    kubeadm.x86_64                       1.10.9-0                         kubernetes
    kubeadm.x86_64                       1.10.8-0                         kubernetes
    kubeadm.x86_64                       1.10.7-0                         kubernetes
    kubeadm.x86_64                       1.10.6-0                         kubernetes
    kubeadm.x86_64                       1.10.5-0                         kubernetes
    kubeadm.x86_64                       1.10.4-0                         kubernetes
    kubeadm.x86_64                       1.10.3-0                         kubernetes
    kubeadm.x86_64                       1.10.2-0                         kubernetes
    kubeadm.x86_64                       1.10.13-0                        kubernetes
    kubeadm.x86_64                       1.10.12-0                        kubernetes
    kubeadm.x86_64                       1.10.11-0                        kubernetes
    kubeadm.x86_64                       1.10.1-0                         kubernetes
    kubeadm.x86_64                       1.10.10-0                        kubernetes
    kubeadm.x86_64                       1.10.0-0                         kubernetes
    [root@k8s-master01 yum.repos.d]#

  4.3 Install the specified kubeadm component version

  Install the Kubernetes components on all nodes. This example installs 1.18.20:

    [root@k8s-master01 yum.repos.d]# yum install -y kubeadm-1.18.20-0.x86_64 kubectl-1.18.20-0.x86_64 kubelet-1.18.20-0.x86_64
    Last metadata expiration check: 0:03:13 ago on Fri 17 Dec 2021 06:23:15 PM CST.
    Dependencies resolved.
    ======================================================================================================================
     Package                        Architecture           Version                       Repository                  Size
    ======================================================================================================================
    Installing:
     kubeadm                        x86_64                 1.18.20-0                     kubernetes                 8.8 M
     kubectl                        x86_64                 1.18.20-0                     kubernetes                 9.5 M
     kubelet                        x86_64                 1.18.20-0                     kubernetes                  21 M
    Installing dependencies:
     cri-tools                      x86_64                 1.19.0-0                      kubernetes                 5.7 M
     kubernetes-cni                 x86_64                 0.8.7-0                       kubernetes                  19 M
     socat                          x86_64                 1.7.4.1-1.el8                 AppStream                  323 k
    
    Transaction Summary
    ======================================================================================================================
    Install  6 Packages
    
    Total download size: 64 M
    Installed size: 268 M
    Downloading Packages:
    (1/6): socat-1.7.4.1-1.el8.x86_64.rpm                                                  28 MB/s | 323 kB     00:00    
    (2/6): 7b74bef0dca4f00ce1005168bdff8128479b15358b47b7f1514206789490c01a-kubeadm-1.18. 1.3 MB/s | 8.8 MB     00:06    
    (3/6): 67ffa375b03cea72703fe446ff00963919e8fce913fbc4bb86f06d1475a6bdf9-cri-tools-1.1 715 kB/s | 5.7 MB     00:08    
    (4/6): 16f7bea4bddbf51e2f5582bce368bf09d4d1ed98a82ca1e930e9fe183351a653-kubectl-1.18. 688 kB/s | 9.5 MB     00:14    
    (5/6): 942aea8dd81ddbe1873f7760007e31325c9740fa9f697565a83af778c22a419d-kubelet-1.18. 1.4 MB/s |  21 MB     00:15    
    (6/6): db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cn 1.2 MB/s |  19 MB     00:15    
    ----------------------------------------------------------------------------------------------------------------------
    Total                                                                                 2.7 MB/s |  64 MB     00:23     
    Running transaction check
    Transaction check succeeded.
    Running transaction test
    Transaction test succeeded.
    Running transaction
      Preparing        :                                                                                              1/1 
      Installing       : kubectl-1.18.20-0.x86_64                                                                     1/6 
      Installing       : cri-tools-1.19.0-0.x86_64                                                                    2/6 
      Installing       : socat-1.7.4.1-1.el8.x86_64                                                                   3/6 
      Installing       : kubernetes-cni-0.8.7-0.x86_64                                                                4/6 
      Installing       : kubelet-1.18.20-0.x86_64                                                                     5/6 
      Installing       : kubeadm-1.18.20-0.x86_64                                                                     6/6 
      Running scriptlet: kubeadm-1.18.20-0.x86_64                                                                     6/6 
      Verifying        : socat-1.7.4.1-1.el8.x86_64                                                                   1/6 
      Verifying        : cri-tools-1.19.0-0.x86_64                                                                    2/6 
      Verifying        : kubeadm-1.18.20-0.x86_64                                                                     3/6 
      Verifying        : kubectl-1.18.20-0.x86_64                                                                     4/6 
      Verifying        : kubelet-1.18.20-0.x86_64                                                                     5/6 
      Verifying        : kubernetes-cni-0.8.7-0.x86_64                                                                6/6 
    
    Installed:
      cri-tools-1.19.0-0.x86_64       kubeadm-1.18.20-0.x86_64     kubectl-1.18.20-0.x86_64   kubelet-1.18.20-0.x86_64  
      kubernetes-cni-0.8.7-0.x86_64   socat-1.7.4.1-1.el8.x86_64  
    
    Complete!
    [root@k8s-master01 yum.repos.d]# 

    4.4 Configure the pause image

        The default pause image comes from the gcr.io registry, which may be unreachable from mainland China, so configure kubelet to use Alibaba Cloud's pause image instead. kubeadm reads the variables in this file during initialization:

    DOCKER_CGROUPS=$(docker info | grep 'Cgroup Driver' | awk -F ' ' '{print $3}')
    cat >/etc/sysconfig/kubelet<<EOF
    KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2"
    EOF
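The grep/awk extraction above can be sanity-checked without a running Docker daemon; a minimal sketch that feeds a captured `docker info` line (the sample value here is illustrative, it may also be `cgroupfs`) through the same pipeline:

```shell
# Feed a captured "docker info" line through the same grep/awk pipeline
# used above; the third whitespace-separated field is the driver name.
sample_line="Cgroup Driver: systemd"
DOCKER_CGROUPS=$(echo "$sample_line" | grep 'Cgroup Driver' | awk -F ' ' '{print $3}')
echo "$DOCKER_CGROUPS"   # → systemd
```

Deriving the value instead of hardcoding it matters because kubelet refuses to start when its `--cgroup-driver` does not match Docker's actual driver.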

     五、Deploy HAProxy + Keepalived

      This section sets up the highly available entry point for the kube-apiserver: HAProxy load-balances TCP connections across the Master nodes' apiservers, and Keepalived keeps the virtual IP (k8s-master-lb) on a healthy Master.

      This example uses HAProxy + Keepalived, run as daemons on all Master nodes. Install both with yum:

    yum install -y keepalived haproxy

      Configure HAProxy on all Master nodes; the configuration is identical on every Master node:

    cat >/etc/haproxy/haproxy.cfg<<EOF
    global
      maxconn  2000
      ulimit-n  16384
      log  127.0.0.1 local0 err
      stats timeout 30s
    
    defaults
      log global
      mode  http
      option  httplog
      timeout connect 5000
      timeout client  50000
      timeout server  50000
      timeout http-request 15s
      timeout http-keep-alive 15s
    
    frontend monitor-in
      bind *:33305
      mode http
      option httplog
      monitor-uri /monitor
    
    listen stats
      bind    *:8006
      mode    http
      stats   enable
      stats   hide-version
      stats   uri       /stats
      stats   refresh   30s
      stats   realm     Haproxy\ Statistics
      stats   auth      admin:admin
    
    frontend k8s-master
      bind 0.0.0.0:16443
      bind 127.0.0.1:16443
      mode tcp
      option tcplog
      tcp-request inspect-delay 5s
      default_backend k8s-master
    
    backend k8s-master
      mode tcp
      option tcplog
      option tcp-check
      balance roundrobin
      default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
      server k8s-master01    10.100.12.168:6443  check
      server k8s-master02    10.100.10.200:6443  check
    EOF
    [root@k8s-master01 config]# systemctl enable --now haproxy
    Created symlink /etc/systemd/system/multi-user.target.wants/haproxy.service → /usr/lib/systemd/system/haproxy.service.
    [root@k8s-master01 config]# systemctl status haproxy
    ● haproxy.service - HAProxy Load Balancer
       Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
       Active: active (running) since Tue 2021-12-21 15:42:46 CST; 6s ago
      Process: 21373 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $OPTIONS (code=exited, status=0/SUCCESS)
     Main PID: 21376 (haproxy)
        Tasks: 2 (limit: 24768)
       Memory: 2.1M
       CGroup: /system.slice/haproxy.service
               ├─21376 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
               └─21378 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
    
    Dec 21 15:42:46 k8s-master01 systemd[1]: Starting HAProxy Load Balancer...
    Dec 21 15:42:46 k8s-master01 haproxy[21376]: [WARNING] 354/154246 (21376) : parsing [/etc/haproxy/haproxy.cfg:43] : b>
    Dec 21 15:42:46 k8s-master01 systemd[1]: Started HAProxy Load Balancer.
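Before trusting the listener, it is worth confirming that the rendered config really contains both apiserver backends (the full file can additionally be syntax-checked with `haproxy -c -f /etc/haproxy/haproxy.cfg`). A small sketch against a scratch copy, using just the backend stanza from above:

```shell
# Sanity-check a rendered HAProxy backend: both Master servers should be
# present, each with a health check ("check") on apiserver port 6443.
cfg=$(mktemp)
cat >"$cfg"<<'CFG'
backend k8s-master
  mode tcp
  server k8s-master01    10.100.12.168:6443  check
  server k8s-master02    10.100.10.200:6443  check
CFG
grep -c ':6443  check' "$cfg"   # → 2
rm -f "$cfg"
```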

      Configure Keepalived on all Master nodes. Note: adjust the network interface, priority, and local IP for each node.

      keepalived.conf for the k8s-master01 node:

    cat >/etc/keepalived/keepalived.conf<<EOF
    ! Configuration File for keepalived
    global_defs {
        router_id LVS_DEVEL
    }
    vrrp_script chk_apiserver {
        script "/etc/keepalived/check_apiserver.sh"
        interval 2
        weight -5
        fall 3  
        rise 2
    }
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        mcast_src_ip 10.100.12.168
        virtual_router_id 51
        priority 100
        advert_int 2
        authentication {
            auth_type PASS
            auth_pass FMVm6NFFccY8WjhK
        }
        virtual_ipaddress {
            10.100.10.103
        }
    #    track_script {
    #       chk_apiserver
    #    }
    }
    EOF

      keepalived.conf for the k8s-master02 node (BACKUP role, lower priority than k8s-master01):

    cat >/etc/keepalived/keepalived.conf<<EOF
    ! Configuration File for keepalived
    global_defs {
        router_id LVS_DEVEL
    }
    vrrp_script chk_apiserver {
        script "/etc/keepalived/check_apiserver.sh"
        interval 2
        weight -5
        fall 3  
        rise 2
    }
    vrrp_instance VI_1 {
        state BACKUP
        interface eth0
        mcast_src_ip 10.100.10.200
        virtual_router_id 51
        priority 99
        advert_int 2
        authentication {
            auth_type PASS
            auth_pass FMVm6NFFccY8WjhK
        }
        virtual_ipaddress {
            10.100.10.103
        }
    #    track_script {
    #       chk_apiserver
    #    }
    }
    EOF

      The health-check script /etc/keepalived/check_apiserver.sh on the k8s-master01 and k8s-master02 nodes:

    cat >/etc/keepalived/check_apiserver.sh <<'EOF'
    #!/bin/bash
    
    function check_apiserver() {
      for ((i=0;i<5;i++));do
        apiserver_job_id=$(pgrep kube-apiserver)
        if [[ ! -z $apiserver_job_id ]];then
           return
        else
           sleep 2
        fi
        apiserver_job_id=0
      done
    }
    
    # 1: running 0: stopped
    check_apiserver
    if [[ $apiserver_job_id -eq 0 ]]; then
        /usr/bin/systemctl stop keepalived
        exit 1
    else
        exit 0
    fi
    EOF

    chmod a+x /etc/keepalived/check_apiserver.sh
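One detail that is easy to get wrong when writing this script with a heredoc: an unquoted delimiter (`<<EOF`) makes the shell expand `$(...)` and `$variables` while writing the file, so the script ends up with baked-in values instead of live checks. A quoted delimiter (`<<'EOF'`) writes them literally, which is what the health check needs. A minimal demonstration:

```shell
# With a quoted heredoc delimiter, the command substitution survives into
# the file verbatim instead of being expanded at write time.
f=$(mktemp)
cat >"$f"<<'SCRIPT'
pid=$(pgrep keepalived)
SCRIPT
cat "$f"   # → pid=$(pgrep keepalived)
rm -f "$f"
```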

       Note: the health check below is commented out (disabled); enable it only after the cluster has been fully established.

    #    track_script {
    #       chk_apiserver
    #    }

      Start haproxy and keepalived:

    systemctl enable --now haproxy
    systemctl enable --now keepalived
    systemctl status haproxy
    systemctl status keepalived

      High availability does not have to be implemented with HAProxy + Keepalived; on Alibaba Cloud, for example, you can use an SLB instance, or replace HAProxy with Nginx.

    六、Cluster initialization

      This step initializes the Kubernetes cluster and generates the certificates and configuration files the cluster uses; unlike a binary installation, where you generate them yourself, kubeadm produces them automatically. kubeadm can drive initialization from a kubeadm-config file, so create one for each Master node in advance.

      kubeadm-config for the k8s-master01 node:

    cat >/root/kubeadm-config.yaml<<EOF
    # kubeadm 1.18 expects the kubeadm.k8s.io/v1beta2 API; the older
    # v1alpha2 MasterConfiguration format is no longer accepted.
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 10.100.12.168
      bindPort: 6443
    ---
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.18.20
    imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
    controlPlaneEndpoint: "k8s-master-lb:16443"
    controllerManager:
      extraArgs:
        node-monitor-grace-period: 10s
        pod-eviction-timeout: 10s
    apiServer:
      certSANs:
      - 10.100.12.168
      - 10.100.10.200
      - 10.100.10.103
      - 10.100.15.246
      - 10.100.10.195
      - k8s-master01
      - k8s-master02
      - k8s-master-lb
      - k8s-node01
      - k8s-node02
    etcd:
      local:
        extraArgs:
          listen-client-urls: "https://127.0.0.1:2379,https://10.100.12.168:2379"
          advertise-client-urls: "https://10.100.12.168:2379"
          listen-peer-urls: "https://10.100.12.168:2380"
          initial-advertise-peer-urls: "https://10.100.12.168:2380"
          initial-cluster: "k8s-master01=https://10.100.12.168:2380"
        serverCertSANs:
        - k8s-master01
        - 10.100.12.168
        peerCertSANs:
        - k8s-master01
        - 10.100.12.168
    networking:
      podSubnet: "172.168.0.0/16"
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: ipvs
    EOF

      You can also initialize directly with kubeadm init command-line flags, for example (for the HA setup in this article you would additionally pass --control-plane-endpoint=k8s-master-lb:16443):

    kubeadm init --kubernetes-version=1.18.20 \
    --apiserver-advertise-address=10.100.12.168 \
    --image-repository registry.aliyuncs.com/google_containers \
    --service-cidr=10.0.0.0/24 \
    --pod-network-cidr=10.244.0.0/16
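kubeadm requires the service CIDR and pod CIDR to be disjoint (and neither should overlap the node network; the node addresses here sit in 10.100.0.0/16). A quick arithmetic sketch, in bash, that checks the two ranges passed above:

```shell
# Convert dotted-quad to an integer and compare range bounds; two CIDR
# ranges are disjoint iff one starts after the other ends.
ip2int() { IFS=. read -r a b c d <<<"$1"; echo $(( (a<<24)+(b<<16)+(c<<8)+d )); }
svc_start=$(ip2int 10.0.0.0);   svc_end=$(( svc_start + (1<<(32-24)) - 1 ))   # --service-cidr /24
pod_start=$(ip2int 10.244.0.0); pod_end=$(( pod_start + (1<<(32-16)) - 1 ))   # --pod-network-cidr /16
if (( pod_start > svc_end || svc_start > pod_end )); then
  echo "disjoint"
else
  echo "overlap"
fi
# → disjoint
```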

      Either way, pre-pull the images on all Master nodes to save time during cluster initialization:

    [root@k8s-master01 ~]# kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.20", GitCommit:"1f3e19b7beb1cc0110255668c4238ed63dadb7ad", GitTreeState:"clean", BuildDate:"2021-06-16T12:56:41Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
    [root@k8s-master01 ~]# kubeadm config images list
    I1221 16:19:40.240143   21776 version.go:255] remote version is much newer: v1.23.1; falling back to: stable-1.18
    W1221 16:19:40.927615   21776 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    k8s.gcr.io/kube-apiserver:v1.18.20
    k8s.gcr.io/kube-controller-manager:v1.18.20
    k8s.gcr.io/kube-scheduler:v1.18.20
    k8s.gcr.io/kube-proxy:v1.18.20
    k8s.gcr.io/pause:3.2
    k8s.gcr.io/etcd:3.4.3-0
    k8s.gcr.io/coredns:1.6.7
    [root@k8s-master01 ~]# 

       I won't explain here how to circumvent network restrictions to pull these k8s.gcr.io images.

    [root@k8s-master01 ~]# docker images
    REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
    k8s.gcr.io/kube-proxy                v1.18.20   27f8b8d51985   6 months ago    117MB
    k8s.gcr.io/kube-apiserver            v1.18.20   7d8d2960de69   6 months ago    173MB
    k8s.gcr.io/kube-controller-manager   v1.18.20   e7c545a60706   6 months ago    162MB
    k8s.gcr.io/kube-scheduler            v1.18.20   a05a1a79adaa   6 months ago    96.1MB
    k8s.gcr.io/pause                     3.2        80d28bedfe5d   22 months ago   683kB
    k8s.gcr.io/coredns                   1.6.7      67da37a9a360   23 months ago   43.8MB
    k8s.gcr.io/etcd                      3.4.3-0    303ce5db0e90   2 years ago     288MB
    [root@k8s-master01 ~]# 
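
       If your hosts cannot reach k8s.gcr.io directly, a common workaround is to pull each required image from the Aliyun mirror repository and retag it to the name kubeadm expects. A sketch (it assumes the mirror still carries these exact tags, which you should verify before relying on it):

```shell
# Image list matching "kubeadm config images list" for v1.18.20.
MIRROR=registry.aliyuncs.com/google_containers
IMAGES="kube-apiserver:v1.18.20 kube-controller-manager:v1.18.20 kube-scheduler:v1.18.20 kube-proxy:v1.18.20 pause:3.2 etcd:3.4.3-0 coredns:1.6.7"

# Pull from the mirror, retag to the k8s.gcr.io name kubeadm expects,
# then remove the mirror-named duplicate.
mirror_pull() {
    for img in $IMAGES; do
        docker pull "$MIRROR/$img" &&
        docker tag  "$MIRROR/$img" "k8s.gcr.io/$img" &&
        docker rmi  "$MIRROR/$img"
    done
}

# Run on every master node:
#   mirror_pull
```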

      Initialize the k8s-master01 node:

    [root@k8s-master01 ~]# kubeadm init --kubernetes-version=1.18.20 \
    > --apiserver-advertise-address=10.100.12.168 \
    > --image-repository registry.aliyuncs.com/google_containers \
    > --service-cidr=10.0.0.0/24 \
    > --pod-network-cidr=10.244.0.0/16
    W1221 17:45:58.099238   33676 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [init] Using Kubernetes version: v1.18.20
    [preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.0.0.1 10.100.12.168]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [10.100.12.168 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [10.100.12.168 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    W1221 17:46:03.156623   33676 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    W1221 17:46:03.158285   33676 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 21.502510 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: j5mlxo.6mbyk52tmdb77j6r
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 10.100.12.168:6443 --token j5mlxo.6mbyk52tmdb77j6r \
        --discovery-token-ca-cert-hash sha256:246287d3ea04d4b73f37f3694b432203ffbf3a00263858ee7181fcea4c905820 
    [root@k8s-master01 ~]#  
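
      Note that the bootstrap token printed above expires after 24 hours. On a master node you can regenerate the full join command at any time with "kubeadm token create --print-join-command", and the --discovery-token-ca-cert-hash value can also be recomputed directly from the cluster CA certificate with openssl (the standard recipe from the kubeadm documentation):

```shell
# Recompute the sha256 hash that kubeadm join passes via
# --discovery-token-ca-cert-hash: the SHA-256 of the CA's public key in DER form.
ca_cert_hash() {
    # $1 = path to the CA certificate, normally /etc/kubernetes/pki/ca.crt
    openssl x509 -pubkey -in "$1" \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'
}

# On k8s-master01:
#   kubeadm token create --print-join-command   # prints a fresh "kubeadm join ..." line
#   ca_cert_hash /etc/kubernetes/pki/ca.crt     # prints just the sha256 hash
```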
  • Original article: https://www.cnblogs.com/zuoyang/p/15703122.html