【k8s部署】6. Deploying the worker nodes


    Unless otherwise specified, all operations below are performed on the zhaoyixin-k8s-01 node.

    Kubernetes worker nodes run the following components:

    • containerd
    • kubelet
    • kube-proxy
    • calico
    • kube-nginx

    0. Install dependency packages

    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "yum install -y epel-release" &
        ssh root@${node_ip} "yum install -y chrony conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget socat git" &
      done
    

    1. kube-apiserver high availability

    This section uses nginx's layer-4 transparent proxying to give the worker-node components highly available access to the kube-apiserver cluster.

    The nginx-proxy-based kube-apiserver HA scheme

    • The kube-controller-manager and kube-scheduler on the control-plane nodes run as multiple instances that each connect to their local kube-apiserver, so as long as one instance is healthy they remain highly available;
    • Pods inside the cluster access kube-apiserver through the K8S service domain name kubernetes; kube-dns resolves it to the IPs of all kube-apiserver nodes, so that path is highly available too;
    • An nginx process runs on every node, with the apiserver instances as its backends; nginx health-checks and load-balances across them;
    • kubelet and kube-proxy access kube-apiserver through the local nginx (listening on 127.0.0.1), achieving high availability for kube-apiserver.

    Download and compile nginx

    Download the source:

    cd /opt/k8s/work
    wget http://nginx.org/download/nginx-1.15.3.tar.gz
    tar -xzvf nginx-1.15.3.tar.gz
    

    Configure the build:

    cd /opt/k8s/work/nginx-1.15.3
    mkdir nginx-prefix
    yum install -y gcc make
    ./configure --with-stream --without-http --prefix=$(pwd)/nginx-prefix --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
    
    • --with-stream: enables layer-4 transparent forwarding (TCP proxy);
    • --without-xxx: disables all other features, so the resulting dynamically linked binary has minimal dependencies;

    Output:

    Configuration summary
      + PCRE library is not used
      + OpenSSL library is not used
      + zlib library is not used
    
      nginx path prefix: "/opt/k8s/work/nginx-1.15.3/nginx-prefix"
      nginx binary file: "/opt/k8s/work/nginx-1.15.3/nginx-prefix/sbin/nginx"
      nginx modules path: "/opt/k8s/work/nginx-1.15.3/nginx-prefix/modules"
      nginx configuration prefix: "/opt/k8s/work/nginx-1.15.3/nginx-prefix/conf"
      nginx configuration file: "/opt/k8s/work/nginx-1.15.3/nginx-prefix/conf/nginx.conf"
      nginx pid file: "/opt/k8s/work/nginx-1.15.3/nginx-prefix/logs/nginx.pid"
      nginx error log file: "/opt/k8s/work/nginx-1.15.3/nginx-prefix/logs/error.log"
      nginx http access log file: "/opt/k8s/work/nginx-1.15.3/nginx-prefix/logs/access.log"
      nginx http client request body temporary files: "client_body_temp"
      nginx http proxy temporary files: "proxy_temp"
    

    Build and install:

    cd /opt/k8s/work/nginx-1.15.3
    make && make install
    

    Verify the compiled nginx

    cd /opt/k8s/work/nginx-1.15.3
    ./nginx-prefix/sbin/nginx -v
    nginx version: nginx/1.15.3
    

    Install and deploy nginx

    Create the directory structure:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
      done
    

    Copy the binary to each node, renaming it kube-nginx:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
        scp /opt/k8s/work/nginx-1.15.3/nginx-prefix/sbin/nginx  root@${node_ip}:/opt/k8s/kube-nginx/sbin/kube-nginx
        ssh root@${node_ip} "chmod a+x /opt/k8s/kube-nginx/sbin/*"
      done
    

    Configure nginx to enable layer-4 transparent forwarding:

    cd /opt/k8s/work
    cat > kube-nginx.conf << EOF
    worker_processes 1;
    
    events {
        worker_connections  1024;
    }
    
    stream {
        upstream backend {
            hash \$remote_addr consistent;
            server 192.168.16.8:6443        max_fails=3 fail_timeout=30s;
            server 192.168.16.10:6443        max_fails=3 fail_timeout=30s;
            server 192.168.16.6:6443        max_fails=3 fail_timeout=30s;
        }
    
        server {
            listen 127.0.0.1:8443;
            proxy_connect_timeout 1s;
            proxy_pass backend;
        }
    }
    EOF
    
    • The server entries under upstream backend are the IPs of the cluster's kube-apiserver nodes and must be adjusted to your environment (a way to generate the list is sketched below);
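
    To avoid hard-coding the IPs, you can generate the server list from the NODE_IPS array defined in environment.sh — a minimal sketch, assuming (as in this deployment) that every node in NODE_IPS also runs kube-apiserver:

    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo "        server ${node_ip}:6443        max_fails=3 fail_timeout=30s;"
      done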

    Distribute the configuration file:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        scp kube-nginx.conf  root@${node_ip}:/opt/k8s/kube-nginx/conf/kube-nginx.conf
      done
    

    Configure the systemd unit file and start the service

    Create the kube-nginx systemd unit file:

    cd /opt/k8s/work
    cat > kube-nginx.service <<EOF
    [Unit]
    Description=kube-apiserver nginx proxy
    After=network.target
    After=network-online.target
    Wants=network-online.target
    
    [Service]
    Type=forking
    ExecStartPre=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -t
    ExecStart=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx
    ExecReload=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -s reload
    PrivateTmp=true
    Restart=always
    RestartSec=5
    StartLimitInterval=0
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    EOF
    

    Distribute the systemd unit file:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        scp kube-nginx.service  root@${node_ip}:/etc/systemd/system/
      done
    

    Start the kube-nginx service:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-nginx && systemctl restart kube-nginx"
      done
    

    Check the kube-nginx service status

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "systemctl status kube-nginx |grep 'Active:'"
      done
    

    Make sure the status is active (running); otherwise inspect the logs with journalctl -u kube-nginx to find the cause.
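
    As a further check — a quick probe, assuming kube-apiserver is already serving on the backend nodes — confirm that each node listens on 127.0.0.1:8443 and that requests get proxied through; even an Unauthorized response from kube-apiserver proves the TCP path works:

    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "ss -lnpt | grep 8443"
        ssh root@${node_ip} "curl -sk https://127.0.0.1:8443/healthz; echo"
      done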

    2. Deploy containerd

    containerd implements the Kubernetes Container Runtime Interface (CRI) and provides the core container runtime functionality, such as image and container management. Compared to dockerd it is simpler, more robust, and more portable.

    Download and distribute the binaries

    Download the binaries:

    cd /opt/k8s/work
    wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.17.0/crictl-v1.17.0-linux-amd64.tar.gz \
      https://github.com/opencontainers/runc/releases/download/v1.0.0-rc10/runc.amd64 \
      https://github.com/containernetworking/plugins/releases/download/v0.8.5/cni-plugins-linux-amd64-v0.8.5.tgz \
      https://github.com/containerd/containerd/releases/download/v1.3.3/containerd-1.3.3.linux-amd64.tar.gz
    

    Unpack:

    cd /opt/k8s/work
    mkdir containerd
    tar -xvf containerd-1.3.3.linux-amd64.tar.gz -C containerd
    tar -xvf crictl-v1.17.0-linux-amd64.tar.gz
    
    mkdir cni-plugins
    sudo tar -xvf cni-plugins-linux-amd64-v0.8.5.tgz -C cni-plugins
    
    sudo mv runc.amd64 runc
    

    Distribute the binaries to all worker nodes:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        scp containerd/bin/*  crictl  cni-plugins/*  runc  root@${node_ip}:/opt/k8s/bin
        ssh root@${node_ip} "chmod a+x /opt/k8s/bin/* && mkdir -p /etc/cni/net.d"
      done
    

    Create and distribute the containerd configuration file

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    cat << EOF | sudo tee containerd-config.toml
    version = 2
    root = "${CONTAINERD_DIR}/root"
    state = "${CONTAINERD_DIR}/state"
    
    [plugins]
      [plugins."io.containerd.grpc.v1.cri"]
        sandbox_image = "registry.cn-beijing.aliyuncs.com/images_k8s/pause-amd64:3.1"
        [plugins."io.containerd.grpc.v1.cri".cni]
          bin_dir = "/opt/k8s/bin"
          conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.runtime.v1.linux"]
        shim = "containerd-shim"
        runtime = "runc"
        runtime_root = ""
        no_shim = false
        shim_debug = false
    EOF
    
    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "mkdir -p /etc/containerd/ ${CONTAINERD_DIR}/{root,state}"
        scp containerd-config.toml root@${node_ip}:/etc/containerd/config.toml
      done
    

    Create the containerd systemd unit file

    cd /opt/k8s/work
    cat <<EOF | sudo tee containerd.service
    [Unit]
    Description=containerd container runtime
    Documentation=https://containerd.io
    After=network.target
    
    [Service]
    Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
    ExecStartPre=/sbin/modprobe overlay
    ExecStart=/opt/k8s/bin/containerd
    Restart=always
    RestartSec=5
    Delegate=yes
    KillMode=process
    OOMScoreAdjust=-999
    LimitNOFILE=1048576
    LimitNPROC=infinity
    LimitCORE=infinity
    
    [Install]
    WantedBy=multi-user.target
    EOF
    

    Distribute the systemd unit file and start the containerd service

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        scp containerd.service root@${node_ip}:/etc/systemd/system
        ssh root@${node_ip} "systemctl enable containerd && systemctl restart containerd"
      done
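    

    Check that containerd is now running on every node, mirroring the status checks used for the other services:

    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "systemctl status containerd | grep 'Active:'"
      done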
    

    Create and distribute the crictl configuration file

    crictl is a command-line tool for CRI-compatible container runtimes, providing docker-like commands; see the official documentation for details.

    cd /opt/k8s/work
    cat << EOF | sudo tee crictl.yaml
    runtime-endpoint: unix:///run/containerd/containerd.sock
    image-endpoint: unix:///run/containerd/containerd.sock
    timeout: 10
    debug: false
    EOF
    

    Distribute it to all worker nodes:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        scp crictl.yaml root@${node_ip}:/etc/crictl.yaml
      done
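    

    With containerd running and /etc/crictl.yaml in place, a quick smoke test — crictl info should print the runtime status, and the image list should be empty at this point:

    crictl info | head
    crictl images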
    

    3. Deploy kubelet

    kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs.

    On startup, kubelet automatically registers the node with kube-apiserver; the built-in cadvisor collects and monitors the node's resource usage.

    For security, this deployment disables kubelet's insecure HTTP port and authenticates and authorizes all requests, rejecting unauthorized access (e.g. from apiserver or heapster).

    Download and distribute the kubelet binaries

    Refer to the binary download section (《下载二进制文件》) of 【k8s部署】5. 部署 master 节点.

    Create the kubelet bootstrap kubeconfig files

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_name in ${NODE_NAMES[@]}
      do
        echo ">>> ${node_name}"
    
        # create a bootstrap token
        export BOOTSTRAP_TOKEN=$(kubeadm token create \
          --description kubelet-bootstrap-token \
          --groups system:bootstrappers:${node_name} \
          --kubeconfig ~/.kube/config)
    
        # set cluster parameters
        kubectl config set-cluster kubernetes \
          --certificate-authority=/etc/kubernetes/cert/ca.pem \
          --embed-certs=true \
          --server=${KUBE_APISERVER} \
          --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
    
        # set client authentication parameters
        kubectl config set-credentials kubelet-bootstrap \
          --token=${BOOTSTRAP_TOKEN} \
          --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
    
        # set the context
        kubectl config set-context default \
          --cluster=kubernetes \
          --user=kubelet-bootstrap \
          --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
    
        # set the default context
        kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
      done
    
    • Only the token is written into the kubeconfig; after bootstrapping completes, kube-controller-manager creates the client and server certificates for kubelet;

    List the tokens kubeadm created for each node:

    $ kubeadm token list --kubeconfig ~/.kube/config
    TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
    3uk8cy.1yeeawz00uxr2r01   23h       2020-05-31T15:05:58+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:zhaoyixin-k8s-03
    udg7tq.qh9dksbq0u0jxjat   23h       2020-05-31T15:05:55+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:zhaoyixin-k8s-01
    vl120m.v8a8hdecwkpo4cyn   23h       2020-05-31T15:05:57+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:zhaoyixin-k8s-02
    
    • Tokens are valid for 1 day; once expired they can no longer be used to bootstrap a kubelet and are cleaned up by kube-controller-manager's tokencleaner;
    • When kube-apiserver receives a kubelet bootstrap token, it sets the request's user to system:bootstrap:<Token ID> and the group to system:bootstrappers; a ClusterRoleBinding will be created for this group later;

    Distribute the bootstrap kubeconfig files to all worker nodes

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_name in ${NODE_NAMES[@]}
      do
        echo ">>> ${node_name}"
        scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
      done
    

    Create and distribute the kubelet configuration file

    Create the kubelet configuration template (for the available options, see the comments in the source code):

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    cat > kubelet-config.yaml.template <<EOF
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: "##NODE_IP##"
    staticPodPath: ""
    syncFrequency: 1m
    fileCheckFrequency: 20s
    httpCheckFrequency: 20s
    staticPodURL: ""
    port: 10250
    readOnlyPort: 0
    rotateCertificates: true
    serverTLSBootstrap: true
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
      x509:
        clientCAFile: "/etc/kubernetes/cert/ca.pem"
    authorization:
      mode: Webhook
    registryPullQPS: 0
    registryBurst: 20
    eventRecordQPS: 0
    eventBurst: 20
    enableDebuggingHandlers: true
    enableContentionProfiling: true
    healthzPort: 10248
    healthzBindAddress: "##NODE_IP##"
    clusterDomain: "${CLUSTER_DNS_DOMAIN}"
    clusterDNS:
      - "${CLUSTER_DNS_SVC_IP}"
    nodeStatusUpdateFrequency: 10s
    nodeStatusReportFrequency: 1m
    imageMinimumGCAge: 2m
    imageGCHighThresholdPercent: 85
    imageGCLowThresholdPercent: 80
    volumeStatsAggPeriod: 1m
    kubeletCgroups: ""
    systemCgroups: ""
    cgroupRoot: ""
    cgroupsPerQOS: true
    cgroupDriver: cgroupfs
    runtimeRequestTimeout: 10m
    hairpinMode: promiscuous-bridge
    maxPods: 220
    podCIDR: "${CLUSTER_CIDR}"
    podPidsLimit: -1
    resolvConf: /etc/resolv.conf
    maxOpenFiles: 1000000
    kubeAPIQPS: 1000
    kubeAPIBurst: 2000
    serializeImagePulls: false
    evictionHard:
      memory.available:  "100Mi"
      nodefs.available:  "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
    evictionSoft: {}
    enableControllerAttachDetach: true
    failSwapOn: true
    containerLogMaxSize: 20Mi
    containerLogMaxFiles: 10
    systemReserved: {}
    kubeReserved: {}
    systemReservedCgroup: ""
    kubeReservedCgroup: ""
    enforceNodeAllocatable: ["pods"]
    EOF
    
    • address: the address the kubelet secure port (https, 10250) listens on; it must not be 127.0.0.1, otherwise kube-apiserver, heapster, etc. cannot call the kubelet API;
    • readOnlyPort=0: disables the read-only port (default 10255), equivalent to leaving it unset;
    • authentication.anonymous.enabled: set to false to disallow anonymous access to port 10250;
    • authentication.x509.clientCAFile: specifies the CA certificate that signs client certificates, enabling HTTPS certificate authentication;
    • authentication.webhook.enabled=true: enables HTTPS bearer token authentication;
    • requests that pass neither x509 certificate nor webhook authentication (from kube-apiserver or other clients) are rejected with Unauthorized;
    • authorization.mode=Webhook: kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user/group has permission to operate on a resource (RBAC);
    • featureGates.RotateKubeletClientCertificate and featureGates.RotateKubeletServerCertificate: rotate certificates automatically; the certificate lifetime is determined by kube-controller-manager's --experimental-cluster-signing-duration flag;
    • must be run as root;

    Create and distribute a kubelet configuration file for each node:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do 
        echo ">>> ${node_ip}"
        sed -e "s/##NODE_IP##/${node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_ip}.yaml.template
        scp kubelet-config-${node_ip}.yaml.template root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
      done
    

    Create and distribute the kubelet systemd unit file

    Create the kubelet systemd unit file template:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    cat > kubelet.service.template <<EOF
    [Unit]
    Description=Kubernetes Kubelet
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=containerd.service
    Requires=containerd.service
    
    [Service]
    WorkingDirectory=${K8S_DIR}/kubelet
    ExecStart=/opt/k8s/bin/kubelet \\
      --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
      --cert-dir=/etc/kubernetes/cert \\
      --network-plugin=cni \\
      --cni-conf-dir=/etc/cni/net.d \\
      --container-runtime=remote \\
      --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
      --root-dir=${K8S_DIR}/kubelet \\
      --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
      --config=/etc/kubernetes/kubelet-config.yaml \\
      --hostname-override=##NODE_NAME## \\
      --image-pull-progress-deadline=15m \\
      --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
      --logtostderr=true \\
      --v=2
    Restart=always
    RestartSec=5
    StartLimitInterval=0
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    • If --hostname-override is set, kube-proxy must be set to the same value, otherwise the Node will not be found;
    • --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; kubelet uses the username and token in it to send a TLS bootstrapping request to kube-apiserver;
    • After K8S approves the kubelet's CSR, it writes the certificate and private key to the --cert-dir directory and then writes the --kubeconfig file;
    • --pod-infra-container-image: do not use Red Hat's pod-infrastructure:latest image; it cannot reap containers' zombie processes;

    Create and distribute a kubelet systemd unit file for each node:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_name in ${NODE_NAMES[@]}
      do 
        echo ">>> ${node_name}"
        sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
        scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
      done
    

    Grant kube-apiserver permission to access the kubelet API

    When executing kubectl exec, run, logs, etc., apiserver forwards the request to the kubelet's https port. Here we define an RBAC rule that authorizes the username in apiserver's certificate kubernetes.pem (CN: kubernetes-master) to access the kubelet API:

    kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes-master
    

    Bootstrap Token Auth and granting permissions

    On startup, kubelet checks whether the file specified by --kubeconfig exists; if it does not, kubelet uses the kubeconfig specified by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.

    When kube-apiserver receives the CSR, it authenticates the embedded token; on success it sets the request's user to system:bootstrap:<Token ID> and group to system:bootstrappers. This process is called Bootstrap Token Auth.

    By default, this user and group have no permission to create CSRs, so kubelet fails to start.

    The fix is to create a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:

    kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
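
    You can verify the binding with kubectl auth can-i, impersonating a bootstrap user — a quick check, using the token ID 3uk8cy from the kubeadm token list output above as an example; the expected answer is yes:

    kubectl auth can-i create certificatesigningrequests \
      --as=system:bootstrap:3uk8cy --as-group=system:bootstrappers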
    

    Automatically approve CSRs and generate kubelet client certificates

    After kubelet creates a CSR, it must be approved. There are two ways:

    1. kube-controller-manager approves it automatically;
    2. manual approval with the kubectl certificate approve command;

    Once the CSR is approved, kube-controller-manager creates the client certificate for the kubelet; the csrapproving controller in kube-controller-manager uses the SubjectAccessReview API to check whether the kubelet's request (whose group is system:bootstrappers) has the required permissions.

    Create three ClusterRoleBindings granting the groups system:bootstrappers and system:nodes the permissions to approve client, renew client, and renew server certificates (server CSRs are approved manually; see below):

    cd /opt/k8s/work
    cat > csr-crb.yaml <<EOF
     # Approve all CSRs for the group "system:bootstrappers"
     kind: ClusterRoleBinding
     apiVersion: rbac.authorization.k8s.io/v1
     metadata:
       name: auto-approve-csrs-for-group
     subjects:
     - kind: Group
       name: system:bootstrappers
       apiGroup: rbac.authorization.k8s.io
     roleRef:
       kind: ClusterRole
       name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
       apiGroup: rbac.authorization.k8s.io
    ---
     # To let a node of the group "system:nodes" renew its own credentials
     kind: ClusterRoleBinding
     apiVersion: rbac.authorization.k8s.io/v1
     metadata:
       name: node-client-cert-renewal
     subjects:
     - kind: Group
       name: system:nodes
       apiGroup: rbac.authorization.k8s.io
     roleRef:
       kind: ClusterRole
       name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
       apiGroup: rbac.authorization.k8s.io
    ---
    # A ClusterRole which instructs the CSR approver to approve a node requesting a
    # serving cert matching its client cert.
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: approve-node-server-renewal-csr
    rules:
    - apiGroups: ["certificates.k8s.io"]
      resources: ["certificatesigningrequests/selfnodeserver"]
      verbs: ["create"]
    ---
     # To let a node of the group "system:nodes" renew its own server credentials
     kind: ClusterRoleBinding
     apiVersion: rbac.authorization.k8s.io/v1
     metadata:
       name: node-server-cert-renewal
     subjects:
     - kind: Group
       name: system:nodes
       apiGroup: rbac.authorization.k8s.io
     roleRef:
       kind: ClusterRole
       name: approve-node-server-renewal-csr
       apiGroup: rbac.authorization.k8s.io
    EOF
    kubectl apply -f csr-crb.yaml
    
    • auto-approve-csrs-for-group: automatically approves a node's first CSR; note that at the first CSR the requesting group is system:bootstrappers;
    • node-client-cert-renewal: automatically approves renewal of a node's expiring client certificate; the group of the automatically generated certificate is system:nodes;
    • node-server-cert-renewal: automatically approves renewal of a node's expiring server certificate; the group of the automatically generated certificate is system:nodes;
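
    To confirm that the bindings and the clusterrole described above were created:

    kubectl get clusterrolebinding auto-approve-csrs-for-group node-client-cert-renewal node-server-cert-renewal
    kubectl get clusterrole approve-node-server-renewal-csr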

    Start the kubelet service

    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
        ssh root@${node_ip} "/usr/sbin/swapoff -a"
        ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
      done
    
    • the working directory must be created before starting the service;
    • swap must be turned off, otherwise kubelet fails to start (see the note below on making this persistent);
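
    Note that swapoff -a only disables swap until the next reboot. To keep kubelet working across reboots, you may also want to comment out the swap entry in /etc/fstab — a sketch, assuming a standard fstab layout (adjust the pattern to your system):

    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "sed -i '/ swap / s/^/#/' /etc/fstab"
      done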

    After startup, kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once the CSR is approved, kube-controller-manager creates the kubelet's TLS client certificate and private key and writes the --kubeconfig file.

    Note: kube-controller-manager must be configured with the --cluster-signing-cert-file and --cluster-signing-key-file flags, or it will not create certificates and private keys for TLS bootstrapping.

    Check kubelet status

    After a short wait, the three nodes' bootstrap CSRs have been automatically approved:

    $ kubectl get csr
    NAME        AGE   REQUESTOR                      CONDITION
    csr-ct8r8   55s   system:node:zhaoyixin-k8s-02   Pending
    csr-dtm97   72s   system:bootstrap:udg7tq        Approved,Issued
    csr-hwsnh   70s   system:bootstrap:vl120m        Approved,Issued
    csr-jxml4   57s   system:node:zhaoyixin-k8s-01   Pending
    csr-tw6m6   70s   system:bootstrap:3uk8cy        Approved,Issued
    csr-xh7j6   56s   system:node:zhaoyixin-k8s-03   Pending
    
    • The Pending CSRs are for the kubelet server certificates and must be approved manually; see below.

    All nodes are registered (the NotReady status is expected and will clear once the network plugin is installed):

    $ kubectl get node
    NAME               STATUS     ROLES    AGE   VERSION
    zhaoyixin-k8s-01   NotReady   <none>   98s   v1.16.6
    zhaoyixin-k8s-02   NotReady   <none>   96s   v1.16.6
    zhaoyixin-k8s-03   NotReady   <none>   96s   v1.16.6
    

    kube-controller-manager generated a kubeconfig file and a public/private key pair for each node:

    $ ls -l /etc/kubernetes/kubelet.kubeconfig
    -rw------- 1 root root 2258 May 30 15:08 /etc/kubernetes/kubelet.kubeconfig
    $ ls -l /etc/kubernetes/cert/kubelet-client-*
    -rw------- 1 root root 1289 May 30 15:08 /etc/kubernetes/cert/kubelet-client-2020-05-30-15-08-26.pem
    lrwxrwxrwx 1 root root   59 May 30 15:08 /etc/kubernetes/cert/kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2020-05-30-15-08-26.pem
    
    • No kubelet server certificate was generated automatically;

    Manually approve the server cert CSRs

    For security reasons, the CSR approving controllers do not automatically approve kubelet server certificate signing requests; they must be approved manually:

    $ kubectl get csr
    NAME        AGE     REQUESTOR                      CONDITION
    csr-ct8r8   2m23s   system:node:zhaoyixin-k8s-02   Pending
    csr-dtm97   2m40s   system:bootstrap:udg7tq        Approved,Issued
    csr-hwsnh   2m38s   system:bootstrap:vl120m        Approved,Issued
    csr-jxml4   2m25s   system:node:zhaoyixin-k8s-01   Pending
    csr-tw6m6   2m38s   system:bootstrap:3uk8cy        Approved,Issued
    csr-xh7j6   2m24s   system:node:zhaoyixin-k8s-03   Pending
    
    $ # manually approve
    $ kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
    
    $ # the server certificates were generated
    $  ls -l /etc/kubernetes/cert/kubelet-*
    -rw------- 1 root root 1289 May 30 15:08 /etc/kubernetes/cert/kubelet-client-2020-05-30-15-08-26.pem
    lrwxrwxrwx 1 root root   59 May 30 15:08 /etc/kubernetes/cert/kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2020-05-30-15-08-26.pem
    -rw------- 1 root root 1338 May 30 15:12 /etc/kubernetes/cert/kubelet-server-2020-05-30-15-12-23.pem
    lrwxrwxrwx 1 root root   59 May 30 15:12 /etc/kubernetes/cert/kubelet-server-current.pem -> /etc/kubernetes/cert/kubelet-server-2020-05-30-15-12-23.pem
    

    kubelet API authentication and authorization

    kubelet is configured with the following authentication parameters:

    • authentication.anonymous.enabled: set to false, disallowing anonymous access to port 10250;
    • authentication.x509.clientCAFile: specifies the CA certificate that signs client certificates, enabling HTTPS certificate authentication;
    • authentication.webhook.enabled=true: enables HTTPS bearer token authentication;

    and with the following authorization parameter:

    • authorization.mode=Webhook: enables RBAC authorization;

    When kubelet receives a request, it authenticates the certificate signature against clientCAFile, or checks whether the bearer token is valid. If both fail, the request is rejected with Unauthorized:

    $ curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.16.8:10250/metrics
    Unauthorized
    
    $ curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://192.168.16.8:10250/metrics
    Unauthorized
    

    Once a request is authenticated, kubelet sends a SubjectAccessReview to kube-apiserver to ask whether the user/group behind the certificate or token has permission to operate on the resource (RBAC).
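
    For illustration, you can issue the same kind of query yourself. A sketch of a SubjectAccessReview mirroring the Forbidden example below (user, verb, resource, and subresource are taken from that output; the returned status.allowed field holds the decision):

    cat <<EOF | kubectl create -f - -o yaml
    apiVersion: authorization.k8s.io/v1
    kind: SubjectAccessReview
    spec:
      user: system:kube-controller-manager
      resourceAttributes:
        verb: get
        resource: nodes
        subresource: metrics
    EOF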

    Certificate authentication and authorization

    $ # a certificate with insufficient permissions;
    $ curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://192.168.16.8:10250/metrics
    Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)
    
    $ # the admin certificate created when deploying the kubectl command-line tool, which has the highest privileges;
    $ curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.16.8:10250/metrics|head
    # HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
    # TYPE apiserver_audit_event_total counter
    apiserver_audit_event_total 0
    # HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
    # TYPE apiserver_audit_requests_rejected_total counter
    apiserver_audit_requests_rejected_total 0
    # HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request.
    # TYPE apiserver_client_certificate_expiration_seconds histogram
    apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
    apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
    
    • the values of --cacert, --cert, and --key must be file paths; a relative path like ./admin.pem must keep its ./ prefix, otherwise the request returns 401 Unauthorized;

    Bearer token authentication and authorization

    Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin so that it has permission to call the kubelet API:

    kubectl create sa kubelet-api-test
    kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
    SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
    TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
    echo ${TOKEN}
    
    $ curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.16.8:10250/metrics | head
    # HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
    # TYPE apiserver_audit_event_total counter
    apiserver_audit_event_total 0
    # HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
    # TYPE apiserver_audit_requests_rejected_total counter
    apiserver_audit_requests_rejected_total 0
    # HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request.
    # TYPE apiserver_client_certificate_expiration_seconds histogram
    apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
    apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
    

    cadvisor and metrics

    cadvisor is a service embedded in the kubelet binary that collects resource usage (CPU, memory, disk, network) of all containers on the node.

    When a browser accesses kube-apiserver's secure port 6443, it warns that the certificate is not trusted. That is because kube-apiserver's server certificate is signed by our own root certificate ca.pem, which must be imported into the operating system and marked as permanently trusted.

    On Windows, import ca.pem with the following command:

    keytool -import -v -trustcacerts -alias appmanagement -file "PATH...\ca.pem" -storepass password -keystore cacerts
    

    We also need to generate a client certificate for the browser to use when accessing apiserver's https port 6443.

    Here we use the admin certificate and private key created when deploying kubectl, together with the CA certificate above, to create a PKCS#12/PFX certificate the browser can use:

    $ openssl pkcs12 -export -out admin.pfx -inkey admin-key.pem -in admin.pem -certfile ca.pem
    

    Import the resulting admin.pfx into the system's certificate store.

    Visiting https://192.168.16.8:10250/metrics and https://192.168.16.8:10250/metrics/cadvisor in a browser returns the kubelet and cadvisor metrics, respectively.

    Note:

    • kubelet-config.yaml sets authentication.anonymous.enabled to false, so anonymous access to the https service on port 10250 is not allowed.

    How clients choose a certificate

    1. Certificate selection is negotiated during the SSL/TLS handshake between client and server;
    2. If the server requires a client certificate, it sends the client a list of CAs it accepts during the handshake (see the openssl check below);
    3. The client searches its certificate store (typically the OS store; Keychain on macOS) for certificates signed by those CAs and offers any matches to the user for selection;
    4. The user picks a certificate (with its private key), which the client then uses to talk to the server.
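
    You can observe step 2 directly with openssl, assuming the kubelet at 192.168.16.8:10250 from the earlier examples — the handshake output lists the CA names the server accepts:

    openssl s_client -connect 192.168.16.8:10250 </dev/null 2>/dev/null | grep -A 2 'Acceptable client certificate CA names'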

    4. Deploy kube-proxy

    kube-proxy runs on all worker nodes. It watches apiserver for changes to services and endpoints and creates routing rules that provide service IPs and load balancing.

    Download and distribute the kube-proxy binary

    Refer to the binary download section (《下载二进制文件》) of 【k8s部署】5. 部署 master 节点.

    Create the kube-proxy certificate

    Create the certificate signing request:

    cd /opt/k8s/work
    cat > kube-proxy-csr.json <<EOF
    {
      "CN": "system:kube-proxy",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "zhaoyixin"
        }
      ]
    }
    EOF
    
    • CN: sets the certificate's User to system:kube-proxy;
    • the predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call the proxy-related APIs of kube-apiserver;
    • the certificate is only used by kube-proxy as a client certificate, so the hosts field is empty;

    Generate the certificate and private key:

    cd /opt/k8s/work
    cfssl gencert -ca=/opt/k8s/work/ca.pem \
      -ca-key=/opt/k8s/work/ca-key.pem \
      -config=/opt/k8s/work/ca-config.json \
      -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
    ls kube-proxy*
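
    Optionally inspect the generated certificate's subject — expect CN=system:kube-proxy:

    openssl x509 -noout -subject -in kube-proxy.pem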
    

    Create and distribute the kubeconfig file

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    kubectl config set-cluster kubernetes \
      --certificate-authority=/opt/k8s/work/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config set-credentials kube-proxy \
      --client-certificate=kube-proxy.pem \
      --client-key=kube-proxy-key.pem \
      --embed-certs=true \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kube-proxy \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
    

    Distribute the kubeconfig file:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_name in ${NODE_NAMES[@]}
      do
        echo ">>> ${node_name}"
        scp kube-proxy.kubeconfig root@${node_name}:/etc/kubernetes/
      done
    

    Create the kube-proxy configuration file

    Create the kube-proxy config file template:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    cat > kube-proxy-config.yaml.template <<EOF
    kind: KubeProxyConfiguration
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    clientConnection:
      burst: 200
      kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
      qps: 100
    bindAddress: ##NODE_IP##
    healthzBindAddress: ##NODE_IP##:10256
    metricsBindAddress: ##NODE_IP##:10249
    enableProfiling: true
    clusterCIDR: ${CLUSTER_CIDR}
    hostnameOverride: ##NODE_NAME##
    mode: "ipvs"
    portRange: ""
    iptables:
      masqueradeAll: false
    ipvs:
      scheduler: rr
      excludeCIDRs: []
    EOF
    
    • bindAddress: the listening address;
    • clientConnection.kubeconfig: the kubeconfig used to connect to apiserver;
    • clusterCIDR: kube-proxy uses it to tell cluster-internal traffic from external traffic; SNAT for requests to Service IPs is only performed when --cluster-cidr or --masquerade-all is specified;
    • hostnameOverride: must match the kubelet's value, otherwise kube-proxy will not find the Node after startup and will not create any ipvs rules;
    • mode: use ipvs mode;

    Create and distribute a kube-proxy configuration file for each node:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for (( i=0; i < 3; i++ ))
      do 
        echo ">>> ${NODE_NAMES[i]}"
        sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${NODE_NAMES[i]}.yaml.template
        scp kube-proxy-config-${NODE_NAMES[i]}.yaml.template root@${NODE_NAMES[i]}:/etc/kubernetes/kube-proxy-config.yaml
      done
    

    Create and distribute the kube-proxy systemd unit file

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    cat > kube-proxy.service <<EOF
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    
    [Service]
    WorkingDirectory=${K8S_DIR}/kube-proxy
    ExecStart=/opt/k8s/bin/kube-proxy \\
      --config=/etc/kubernetes/kube-proxy-config.yaml \\
      --logtostderr=true \\
      --v=2
    Restart=on-failure
    RestartSec=5
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    EOF
    

    Distribute the kube-proxy systemd unit file:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_name in ${NODE_NAMES[@]}
      do 
        echo ">>> ${node_name}"
        scp kube-proxy.service root@${node_name}:/etc/systemd/system/
      done
    

    Start and check the kube-proxy service

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
        ssh root@${node_ip} "modprobe ip_vs_rr"
        ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
      done
    

    Check the startup result

    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "systemctl status kube-proxy|grep Active"
      done
    

    Make sure the status is active (running); otherwise inspect the logs with journalctl -u kube-proxy to find the cause.

    Check the listening ports

    $ sudo netstat -lnpt|grep kube-prox
    tcp        0      0 192.168.16.8:10249      0.0.0.0:*               LISTEN      11115/kube-proxy    
    tcp        0      0 192.168.16.8:10256      0.0.0.0:*               LISTEN      11115/kube-proxy 
    
    • 10249: the http prometheus metrics port;
    • 10256: the http healthz port;
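
    Both ports serve plain HTTP and can be probed directly — a quick check (expect metrics text on 10249 and HTTP 200 on 10256):

    curl -s http://192.168.16.8:10249/metrics | head -3
    curl -s -o /dev/null -w '%{http_code}\n' http://192.168.16.8:10256/healthz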

    Check the ipvs routing rules

    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
      done
    

    Expected output:

    >>> 192.168.16.8
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.254.0.1:443 rr
      -> 192.168.16.6:6443            Masq    1      0          0         
      -> 192.168.16.8:6443            Masq    1      0          0         
      -> 192.168.16.10:6443           Masq    1      0          0         
    >>> 192.168.16.10
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.254.0.1:443 rr
      -> 192.168.16.6:6443            Masq    1      0          0         
      -> 192.168.16.8:6443            Masq    1      0          0         
      -> 192.168.16.10:6443           Masq    1      0          0         
    >>> 192.168.16.6
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.254.0.1:443 rr
      -> 192.168.16.6:6443            Masq    1      0          0         
      -> 192.168.16.8:6443            Masq    1      0          0         
      -> 192.168.16.10:6443           Masq    1      0          0
    

    As shown, all https requests to the K8S service kubernetes are forwarded to port 6443 of the kube-apiserver nodes.

    5. Deploy the calico network

    Kubernetes requires that all nodes in the cluster (including the master nodes) can reach each other over the Pod network.

    calico uses IPIP or BGP (IPIP by default) to create an interconnected Pod network across the nodes.

    Install the calico network plugin

    cd /opt/k8s/work
    curl https://docs.projectcalico.org/manifests/calico.yaml -O
    

    Modify the configuration:

    $ cp calico.yaml calico.yaml.orig
    $ diff calico.yaml.orig calico.yaml
    630c630,632
    <               value: "192.168.0.0/16"
    ---
    >               value: "172.30.0.0/16"
    >             - name: IP_AUTODETECTION_METHOD
    >               value: "interface=eth.*"
    699c701
    <             path: /opt/cni/bin
    ---
    >             path: /opt/k8s/bin
    
    • change the Pod CIDR to 172.30.0.0/16, the Pod address range defined in the environment.sh file from 【k8s部署】1. 环境准备和初始化;
    • calico auto-detects the interconnect NIC; if a node has multiple NICs, you can set a regular expression for the interface names used for interconnection, such as eth.* above (adjust it to your servers' interface names);

    Apply the calico manifest:

    $ kubectl apply -f  calico.yaml
    
    • the calico plugin runs as a daemonset on all K8S nodes.

    Check calico status

    $ kubectl get pods -n kube-system -o wide
    NAME                                       READY   STATUS    RESTARTS   AGE   IP              NODE               NOMINATED NODE   READINESS GATES
    calico-kube-controllers-77d6cbc65f-c9s88   1/1     Running   0          22m   172.30.219.1    zhaoyixin-k8s-02   <none>           <none>
    calico-node-gf8h2                          1/1     Running   0          22m   192.168.16.10   zhaoyixin-k8s-02   <none>           <none>
    calico-node-n26rj                          1/1     Running   0          22m   192.168.16.8    zhaoyixin-k8s-01   <none>           <none>
    calico-node-nx8hz                          1/1     Running   0          22m   192.168.16.6    zhaoyixin-k8s-03   <none>           <none>
    

    Use crictl to list the images calico uses:

    $ crictl  images
    docker.io/calico/cni                                      v3.14.1             35a7136bc71a7       77.6MB
    docker.io/calico/node                                     v3.14.1             04a9b816c7535       90.6MB
    docker.io/calico/pod2daemon-flexvol                       v3.14.1             7f93af2e7e114       37.5MB
    registry.cn-beijing.aliyuncs.com/images_k8s/pause-amd64   3.1                 21a595adc69ca       326kB
    
    • If crictl prints nothing or fails, the configuration file /etc/crictl.yaml may be missing; it should contain the following:
    $ cat /etc/crictl.yaml
    runtime-endpoint: unix:///run/containerd/containerd.sock
    image-endpoint: unix:///run/containerd/containerd.sock
    timeout: 10
    debug: false
    

    References

    opsnull/follow-me-install-kubernetes-cluster
