• Kubernetes Resource Reservation (Part 1)


    Node Allocatable Resources

    Allocatable

    Besides the kubelet, the container runtime, and the other Kubernetes daemons and user pods, a Kubernetes node usually also runs many OS system daemons. Kubernetes assumes that all of the compute resources available on a node (its Capacity) can be used by user pods. In practice, system daemons consume a non-trivial amount of resources, and keeping them running is critical to the stability of the system. To address this, the proposal introduces the concept of Allocatable, which identifies the amount of compute resources available to user pods. Concretely, the kubelet provides a few knobs to reserve resources for OS system daemons and Kubernetes daemons.

    A Kubernetes node can be scheduled up to its Capacity, and by default pods may use all of a node's available capacity. This is problematic, because the node also runs quite a few system daemons that power the OS and Kubernetes itself. Unless resources are set aside for these daemons, they will compete with pods for resources and cause resource-starvation issues on the node. The kubelet exposes a feature named node allocatable that helps reserve compute resources for system daemons. Kubernetes recommends that cluster administrators configure node allocatable according to the workload density on each node.

                Node Capacity
        ---------------------------
        |     kube-reserved       |
        |-------------------------|
        |     system-reserved     |
        |-------------------------|
        |    eviction-threshold   |
        |-------------------------|
        |      allocatable        |
        |   (available for pods)  |
        ---------------------------
    Node Capacity  is the total amount of resources on a Kubernetes node
    Kube-reserved is the amount of resources reserved for Kubernetes components, e.g. the Docker daemon, kubelet, kube-proxy
    System-reserved is the amount of resources reserved for system daemons, i.e. processes not managed by Kubernetes, generally everything under the raw container /system
    eviction-threshold is the node's eviction policy: the thresholds configured through the kubelet startup flags (--eviction-hard and others); when a threshold is crossed, the kubelet evicts pods according to their Kubernetes QoS class
    allocatable     is the amount of resources that pods on the node can actually be allocated, computed as: [Allocatable] = [Node Capacity] - [Kube-Reserved] - [System-Reserved] - [Hard-Eviction-Threshold]

    On a Kubernetes node, Allocatable is defined as the amount of compute resources available to pods. The scheduler does not over-commit Allocatable. CPU, memory, and ephemeral-storage are currently supported.
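
    Both values can be read straight from the node object. A minimal sketch of the relevant part of the output; the node name and the quantities shown are placeholders, not taken from this article's environment:

    $ kubectl describe node <node-name>
    ...
    Capacity:
      cpu:     2
      memory:  4030236Ki
      pods:    64
    Allocatable:
      cpu:     1800m
      memory:  3327900Ki
      pods:    64
    ...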

    Kube-Reserved

    kube-reserved is the parameter used to reserve resources for Kubernetes components. Setting it requires restarting the kubelet (it is passed on the command line at kubelet startup), so it cannot be changed while the kubelet is running (a future version may fix this and support hot reloading).

    --kube-reserved=cpu=500m,memory=500Mi

    Early versions only supported reserving CPU and memory, but Kubernetes will gradually support more resource types, such as local storage and I/O weight, to make resource allocation on Kubernetes nodes more reliable.
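
    When reserving more than CPU and memory, the same flag syntax carries the extra resources; a sketch with arbitrary quantities:

    --kube-reserved=cpu=500m,memory=500Mi,ephemeral-storage=1Gi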

    System-Reserved

    In the initial implementation, system-reserved works just like kube-reserved: both reserve resources. Their purpose is different, though: kube-reserved clearly reserves resources for Kubernetes components, while system-reserved reserves resources for system processes that are not part of Kubernetes.
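
    system-reserved takes the same ResourceName=ResourceQuantity syntax; a sketch with arbitrary quantities:

    --system-reserved=cpu=500m,memory=1Gi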

    Eviction Thresholds

    To improve node reliability, the kubelet evicts pods whenever the node runs out of memory or local storage; evictions and node allocatable together help keep the node stable.

    Since version 1.5, evictions are based on the node's overall capacity usage, and the kubelet evicts pods according to QoS and the configured eviction thresholds; see the documentation for more detail.

    Since version 1.6, Allocatable is enforced by default across all pods on a node via cgroups: pods cannot exceed the Allocatable value, and the cgroups enforce memory and CPU resource.limits.
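
    The limit applied to the top-level pods cgroup can be read back from the cgroup filesystem once the kubelet is running. A minimal sketch, assuming the systemd cgroup driver used later in this article (where the pods cgroup is kubepods.slice); the exact path depends on your driver and cgroup version:

    # memory limit the kubelet applied to the top-level pods cgroup
    cat /sys/fs/cgroup/memory/kubepods.slice/memory.limit_in_bytes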

    Official example (for reference)

      1. Node Capacity is 32Gi, kube-reserved is 2Gi, system-reserved is 1Gi, eviction-hard is set to <100Mi
      2. For this node, the effective Node Allocatable is 28.9Gi only; i.e. if kube and system components use up all their reservation, the memory available for pods is only 28.9Gi and kubelet will evict pods once overall usage of pods crosses that threshold.
      3. If we enforce Node Allocatable (28.9Gi) via top level cgroups, then pods can never exceed 28.9Gi in which case evictions will not be performed unless kernel memory consumption is above 100Mi.
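
    Expressed as kubelet flags, the reservation in this example looks roughly like the following (a sketch; only memory is reserved here, matching the numbers above):

    --kube-reserved=memory=2Gi
    --system-reserved=memory=1Gi
    --eviction-hard=memory.available<100Mi
    --enforce-node-allocatable=pods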

    Notes

    Be careful when reserving resources for the system via system-reserved: if the reservation is not enough for the OS processes to run normally, the result is a fatal error (an OOM kill). It can still be used, provided you know your system's resource usage inside out; only then can the policy be tuned with a sure hand.
    systemd-logind SSH session resources live under /user.slice; their usage is not accounted against the node, and the actual limits sit under /user.slice. Ideally they would be placed under the top-level SystemReserved cgroup.

    Configuration

    Prerequisites

    1. Enable cgroupsPerQOS; the default value is true, i.e. enabled
    2. Configure the cgroup driver. The kubelet supports driving the cgroup hierarchy on the host through a cgroup driver, configured via --cgroup-driver (the cgroup driver here must match the container runtime's; see the sketch after this list). Reference
    • cgroupfs is the default driver; it manipulates the cgroup filesystem on the host directly to manage cgroup sandboxes
    • systemd is an optional driver; it manages cgroup sandboxes using transient slices of resources supported by the init system
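
    A minimal sketch of keeping the two drivers in sync with Docker (the runtime used later in this article), assuming the standard /etc/docker/daemon.json location:

    # /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }

    # /var/lib/kubelet/config.yaml
    cgroupDriver: systemd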

    Configuration parameters

    • --enforce-node-allocatable=[pods][,][kube-reserved][,][system-reserved]

    The default value of this flag is pods

    If --cgroups-per-qos=false, this value must be empty, otherwise the kubelet fails to start

    If kube-reserved and/or system-reserved is specified, the corresponding --kube-reserved-cgroup and/or --system-reserved-cgroup flag must be set as well, otherwise that reservation fails

    Specifying any of the values pods, kube-reserved, system-reserved enables enforcement of the corresponding reservation
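
    Putting these rules together, a flag combination that enforces both the pods and the kube-reserved cgroups might look like this (a sketch; the cgroup path and quantities are examples):

    --enforce-node-allocatable=pods,kube-reserved
    --kube-reserved=cpu=200m,memory=500Mi
    --kube-reserved-cgroup=/kube.slice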

    • --cgroups-per-qos=true

    Enables the QoS-based cgroup hierarchy (official description: Enable QoS based Cgroup hierarchy: top level cgroups for QoS Classes And all Burstable and BestEffort pods are brought up under their specific top level QoS cgroup. Dynamic Kubelet Config (beta): This field should not be updated without a full node reboot. It is safest to keep this value the same as the local config. Default: true). When true, the top-level QoS and pod cgroups are created. (Default: true) (DEPRECATED: this parameter should be set via the config file specified by --config. For more information, see https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/.)

    To better enforce Node Allocatable on the node, you must enable the new cgroup hierarchy via the --cgroups-per-qos flag. It defaults to true. When enabled, the kubelet creates all end-user pods inside a cgroup hierarchy that it manages.

    For more detail, see the recommended cgroup configuration below

    The corresponding configuration on the system looks like this:

    <root@HK-K8S-WN1 /sys/fs/cgroup/memory># pwd
    /sys/fs/cgroup/memory
    <root@HK-K8S-WN1 /sys/fs/cgroup/memory># ls -l
    total 0
    drwxr-xr-x   2 root root 0 Jul 20 22:31 assist
    -rw-r--r--   1 root root 0 Aug 14 14:37 cgroup.clone_children
    --w--w--w-   1 root root 0 Aug 14 15:37 cgroup.event_control
    -rw-r--r--   1 root root 0 Aug 14 15:37 cgroup.procs
    -r--r--r--   1 root root 0 Aug 14 15:37 cgroup.sane_behavior
    drwxr-xr-x   5 root root 0 Mar 16 14:04 kubepods.slice
    -rw-r--r--   1 root root 0 Aug 14 14:37 memory.failcnt
    --w-------   1 root root 0 Aug 14 15:37 memory.force_empty
    -rw-r--r--   1 root root 0 Aug 14 14:37 memory.kmem.failcnt
    -rw-r--r--   1 root root 0 Aug 14 14:37 memory.kmem.limit_in_bytes
    -rw-r--r--   1 root root 0 Aug 14 14:37 memory.kmem.max_usage_in_bytes
    -r--r--r--   1 root root 0 Aug 14 15:37 memory.kmem.slabinfo
    -rw-r--r--   1 root root 0 Aug 14 14:37 memory.kmem.tcp.failcnt
    -rw-r--r--   1 root root 0 Aug 14 14:37 memory.kmem.tcp.limit_in_bytes
    -rw-r--r--   1 root root 0 Aug 14 14:37 memory.kmem.tcp.max_usage_in_bytes
    -r--r--r--   1 root root 0 Aug 14 14:37 memory.kmem.tcp.usage_in_bytes
    -r--r--r--   1 root root 0 Aug 14 14:37 memory.kmem.usage_in_bytes
    -rw-r--r--   1 root root 0 Aug 14 14:37 memory.limit_in_bytes
    -rw-r--r--   1 root root 0 Aug 14 14:37 memory.max_usage_in_bytes
    -rw-r--r--   1 root root 0 Aug 14 14:37 memory.memsw.failcnt
    -rw-r--r--   1 root root 0 Aug 14 14:37 memory.memsw.limit_in_bytes
    -rw-r--r--   1 root root 0 Aug 14 14:37 memory.memsw.max_usage_in_bytes
    -r--r--r--   1 root root 0 Aug 14 14:37 memory.memsw.usage_in_bytes
    -rw-r--r--   1 root root 0 Aug 14 15:37 memory.move_charge_at_immigrate
    -r--r--r--   1 root root 0 Aug 14 15:37 memory.numa_stat
    -rw-r--r--   1 root root 0 Aug 14 15:37 memory.oom_control
    ----------   1 root root 0 Aug 14 15:37 memory.pressure_level
    -rw-r--r--   1 root root 0 Aug 14 14:37 memory.soft_limit_in_bytes
    -r--r--r--   1 root root 0 Aug 14 14:37 memory.stat
    -rw-r--r--   1 root root 0 Aug 14 15:37 memory.swappiness
    -r--r--r--   1 root root 0 Aug 14 14:37 memory.usage_in_bytes
    -rw-r--r--   1 root root 0 Aug 14 14:37 memory.use_hierarchy
    -rw-r--r--   1 root root 0 Aug 14 15:37 notify_on_release
    -rw-r--r--   1 root root 0 Aug 14 15:37 release_agent
    drwxr-xr-x 111 root root 0 Mar 16 14:01 system.slice
    -rw-r--r--   1 root root 0 Aug 14 15:37 tasks
    drwxr-xr-x   2 root root 0 Mar 16 14:01 user.slice
    <root@HK-K8S-WN1 /sys/fs/cgroup/memory># cd kubepods.slice/
    <root@HK-K8S-WN1 /sys/fs/cgroup/memory/kubepods.slice># ls
    cgroup.clone_children                                   memory.failcnt                  memory.kmem.tcp.failcnt             memory.max_usage_in_bytes        memory.numa_stat            memory.usage_in_bytes
    cgroup.event_control                                    memory.force_empty              memory.kmem.tcp.limit_in_bytes      memory.memsw.failcnt             memory.oom_control          memory.use_hierarchy
    cgroup.procs                                            memory.kmem.failcnt             memory.kmem.tcp.max_usage_in_bytes  memory.memsw.limit_in_bytes      memory.pressure_level       notify_on_release
    kubepods-besteffort.slice                               memory.kmem.limit_in_bytes      memory.kmem.tcp.usage_in_bytes      memory.memsw.max_usage_in_bytes  memory.soft_limit_in_bytes  tasks
    kubepods-burstable.slice                                memory.kmem.max_usage_in_bytes  memory.kmem.usage_in_bytes          memory.memsw.usage_in_bytes      memory.stat
    kubepods-pod1d216a46_5b77_4afe_8df7_c7a3af8f737e.slice  memory.kmem.slabinfo            memory.limit_in_bytes               memory.move_charge_at_immigrate  memory.swappiness
    <root@HK-K8S-WN1 /sys/fs/cgroup/memory/kubepods.slice># 
    • --kube-reserved

    Resources reserved for Kubernetes system components, expressed as a set of ResourceName=ResourceQuantity pairs (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi). Currently CPU, memory, and local ephemeral storage for the root filesystem are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more information. (Default: none) (DEPRECATED: this parameter should be set via the config file specified by --config. For more information, see https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/.)

    • --kube-reserved-cgroup

    The absolute path of the top-level cgroup that holds the compute resources reserved for Kubernetes components via --kube-reserved; the two flags go hand in hand and are configured as a pair.

    Example /kube.slice
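
    The kubelet does not create this cgroup itself, so it must exist before startup. One possible way to pre-create it when the systemd cgroup driver is used is a slice unit; this is only a sketch, and the unit name and path are assumptions to adapt to your distribution:

    # /etc/systemd/system/kube.slice (hypothetical unit file)
    [Unit]
    Description=Slice reserved for Kubernetes components

    [Slice]
    CPUAccounting=true
    MemoryAccounting=true

    # then: systemctl daemon-reload && systemctl start kube.slice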

    • --system-reserved

    Resources reserved for the system, expressed as a set of "ResourceName=ResourceQuantity" pairs (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi). Currently only CPU and memory are supported [per the official text, although a local storage quota can be configured as well]. See http://kubernetes.io/docs/user-guide/compute-resources for more information. (Default: "none") (DEPRECATED: this parameter should be set via the config file specified by --config. For more information, see https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/.)

    • --system-reserved-cgroup

    Simply put, this is the absolute path of the top-level cgroup that holds the resources reserved via --system-reserved for system processes outside Kubernetes; it must be set when system-reserved enforcement is enabled. (Default: "" empty) (DEPRECATED: this parameter should be set via the config file specified by --config. For more information, see https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/.)

    Example /system.slice

    • --eviction-hard

    Hard eviction thresholds. If a threshold is crossed, the kubelet immediately starts evicting pods on the node; which pods go first is determined by their Kubernetes QoS class (for example memory.available<1Gi means less than 1 GiB of memory is available). (Default: imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%) (DEPRECATED: this parameter should be set via the config file specified by --config. For more information, see https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/.)
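
    Hard thresholds can be combined with soft thresholds, which give pods a grace period before eviction; a sketch with example signals and values:

    --eviction-hard=memory.available<500Mi,nodefs.available<10%
    --eviction-soft=memory.available<1Gi
    --eviction-soft-grace-period=memory.available=2m
    --eviction-max-pod-grace-period=60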

    Configuration process

    Rule: if --enforce-node-allocatable (enforceNodeAllocatable) includes system-reserved and/or kube-reserved, the corresponding cgroup parameters (systemReservedCgroup and/or kubeReservedCgroup) must be configured. Conversely, if enforceNodeAllocatable does not include system-reserved / kube-reserved, the reservation is not enforced: kubectl describe nodes will show a reduced Allocatable value, but no real limit is applied to the Kubernetes components and OS daemons; in other words, they can still request resources without bound.

    Explanation: the reason Kubernetes does not enforce kubeReserved and systemReserved by default is that the resource usage of those components is hard to control, and a bad setting has severe consequences, potentially keeping the whole platform or even the OS from running, which is fatal. Kubernetes does, however, keep the default evictionHard eviction policy:

    "evictionHard": {
          "imagefs.available": "15%",
          "memory.available": "100Mi",
          "nodefs.available": "10%",
          "nodefs.inodesFree": "5%"
        }
    • Verifying how the parameters relate to each other

    Raise the kubelet log verbosity, as follows:

    <root@HK-K8S-WN4 /var/lib/kubelet># cat kubeadm-flags.env 
    #KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2"
    # The --cgroup-driver=systemd flag above still works in newer versions but produces an error at startup; it has to be configured in config.yaml instead, so it was removed here. The actual configuration is shown further below.
    KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 --v=5"

    Configure kubeReserved (the configuration lives in the /var/lib/kubelet directory; config.yaml is generated there by default at install time, and the kubelet points at it on startup via --config=/var/lib/kubelet/config.yaml), as follows:

    Modify or add the required settings on top of the original file. You may wonder why there is no ephemeral-storage configuration: disks are cheap these days, and unlike CPU and memory a disk quota currently has little practical value, nor can it yet be weighted by I/O. And, to repeat the point above, it is best not to enforce a systemReserved limit, lest you burn yourself.

    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 0s
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 0s
        cacheUnauthorizedTTL: 0s
    clusterDNS:
    - 10.10.0.10
    clusterDomain: nflow.so
    # only enable the kube-reserved reservation
    enforceNodeAllocatable:
    - pods
    - kube-reserved
    # - system-reserved
    # the following is only a test: with enforceNodeAllocatable enabling kube-reserved only (above),
    # kubeReserved and systemReserved are both configured below to see whether kubelet errors out at startup
    systemReserved:
      cpu: 200m
      memory: 2000Mi
    kubeReserved:
      cpu: 200m
      memory: 500Mi
    kubeReservedCgroup: /kube.slice
    systemReservedCgroup: /system.slice
    cgroupDriver: systemd
    maxPods: 64
    cpuManagerReconcilePeriod: 0s
    evictionPressureTransitionPeriod: 4m0s
    fileCheckFrequency: 0s
    healthzBindAddress: 127.0.0.1
    healthzPort: 10248
    httpCheckFrequency: 0s
    imageMinimumGCAge: 0s
    kind: KubeletConfiguration
    nodeStatusReportFrequency: 0s
    nodeStatusUpdateFrequency: 0s
    rotateCertificates: true
    runtimeRequestTimeout: 0s
    staticPodPath: /etc/kubernetes/manifests
    streamingConnectionIdleTimeout: 0s
    syncFrequency: 0s
    volumeStatsAggPeriod: 0s
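
    After saving config.yaml, restart the kubelet and follow its journal to check the effect (the commands assume the systemd-managed kubelet of this environment):

    systemctl restart kubelet
    journalctl -u kubelet -f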

    The kubelet startup log is shown below; focus on the important lines

    Aug 15 18:55:05 HK-K8S-WN4 systemd[1]: Started kubelet: The Kubernetes Node Agent.
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372383   21987 flags.go:33] FLAG: --add-dir-header="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372440   21987 flags.go:33] FLAG: --address="0.0.0.0"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372453   21987 flags.go:33] FLAG: --allowed-unsafe-sysctls="[]"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372463   21987 flags.go:33] FLAG: --alsologtostderr="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372468   21987 flags.go:33] FLAG: --anonymous-auth="true"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372475   21987 flags.go:33] FLAG: --application-metrics-count-limit="100"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372480   21987 flags.go:33] FLAG: --authentication-token-webhook="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372485   21987 flags.go:33] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372491   21987 flags.go:33] FLAG: --authorization-mode="AlwaysAllow"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372497   21987 flags.go:33] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372503   21987 flags.go:33] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372508   21987 flags.go:33] FLAG: --azure-container-registry-config=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372513   21987 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372519   21987 flags.go:33] FLAG: --bootstrap-checkpoint-path=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372523   21987 flags.go:33] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372529   21987 flags.go:33] FLAG: --cert-dir="/var/lib/kubelet/pki"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372534   21987 flags.go:33] FLAG: --cgroup-driver="cgroupfs"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372539   21987 flags.go:33] FLAG: --cgroup-root=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372544   21987 flags.go:33] FLAG: --cgroups-per-qos="true"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372549   21987 flags.go:33] FLAG: --chaos-chance="0"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372558   21987 flags.go:33] FLAG: --client-ca-file=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372563   21987 flags.go:33] FLAG: --cloud-config=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372567   21987 flags.go:33] FLAG: --cloud-provider=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372572   21987 flags.go:33] FLAG: --cluster-dns="[]"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372579   21987 flags.go:33] FLAG: --cluster-domain=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372595   21987 flags.go:33] FLAG: --cni-bin-dir="/opt/cni/bin"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372601   21987 flags.go:33] FLAG: --cni-cache-dir="/var/lib/cni/cache"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372607   21987 flags.go:33] FLAG: --cni-conf-dir="/etc/cni/net.d"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372612   21987 flags.go:33] FLAG: --config="/var/lib/kubelet/config.yaml"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372618   21987 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372624   21987 flags.go:33] FLAG: --container-log-max-files="5"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372631   21987 flags.go:33] FLAG: --container-log-max-size="10Mi"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372636   21987 flags.go:33] FLAG: --container-runtime="docker"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372642   21987 flags.go:33] FLAG: --container-runtime-endpoint="unix:///var/run/dockershim.sock"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372648   21987 flags.go:33] FLAG: --containerd="/run/containerd/containerd.sock"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372654   21987 flags.go:33] FLAG: --contention-profiling="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372659   21987 flags.go:33] FLAG: --cpu-cfs-quota="true"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372664   21987 flags.go:33] FLAG: --cpu-cfs-quota-period="100ms"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372670   21987 flags.go:33] FLAG: --cpu-manager-policy="none"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372677   21987 flags.go:33] FLAG: --cpu-manager-reconcile-period="10s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372683   21987 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372690   21987 flags.go:33] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372696   21987 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372702   21987 flags.go:33] FLAG: --docker-only="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372707   21987 flags.go:33] FLAG: --docker-root="/var/lib/docker"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372713   21987 flags.go:33] FLAG: --docker-tls="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372718   21987 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372723   21987 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372728   21987 flags.go:33] FLAG: --docker-tls-key="key.pem"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372734   21987 flags.go:33] FLAG: --dynamic-config-dir=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372741   21987 flags.go:33] FLAG: --enable-cadvisor-json-endpoints="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372746   21987 flags.go:33] FLAG: --enable-controller-attach-detach="true"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372753   21987 flags.go:33] FLAG: --enable-debugging-handlers="true"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372759   21987 flags.go:33] FLAG: --enable-load-reader="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372764   21987 flags.go:33] FLAG: --enable-server="true"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372768   21987 flags.go:33] FLAG: --enforce-node-allocatable="[pods]"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372780   21987 flags.go:33] FLAG: --event-burst="10"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372785   21987 flags.go:33] FLAG: --event-qps="5"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372790   21987 flags.go:33] FLAG: --event-storage-age-limit="default=0"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372795   21987 flags.go:33] FLAG: --event-storage-event-limit="default=0"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372800   21987 flags.go:33] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372816   21987 flags.go:33] FLAG: --eviction-max-pod-grace-period="0"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372821   21987 flags.go:33] FLAG: --eviction-minimum-reclaim=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372829   21987 flags.go:33] FLAG: --eviction-pressure-transition-period="5m0s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372834   21987 flags.go:33] FLAG: --eviction-soft=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372839   21987 flags.go:33] FLAG: --eviction-soft-grace-period=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372844   21987 flags.go:33] FLAG: --exit-on-lock-contention="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372849   21987 flags.go:33] FLAG: --experimental-allocatable-ignore-eviction="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372855   21987 flags.go:33] FLAG: --experimental-bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372861   21987 flags.go:33] FLAG: --experimental-check-node-capabilities-before-mount="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372866   21987 flags.go:33] FLAG: --experimental-dockershim="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372871   21987 flags.go:33] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372877   21987 flags.go:33] FLAG: --experimental-kernel-memcg-notification="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372881   21987 flags.go:33] FLAG: --experimental-mounter-path=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372885   21987 flags.go:33] FLAG: --fail-swap-on="true"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372890   21987 flags.go:33] FLAG: --feature-gates=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372897   21987 flags.go:33] FLAG: --file-check-frequency="20s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372903   21987 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372908   21987 flags.go:33] FLAG: --hairpin-mode="promiscuous-bridge"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372913   21987 flags.go:33] FLAG: --healthz-bind-address="127.0.0.1"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372919   21987 flags.go:33] FLAG: --healthz-port="10248"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372924   21987 flags.go:33] FLAG: --help="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372930   21987 flags.go:33] FLAG: --hostname-override=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372935   21987 flags.go:33] FLAG: --housekeeping-interval="10s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372942   21987 flags.go:33] FLAG: --http-check-frequency="20s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372948   21987 flags.go:33] FLAG: --image-gc-high-threshold="85"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372953   21987 flags.go:33] FLAG: --image-gc-low-threshold="80"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372958   21987 flags.go:33] FLAG: --image-pull-progress-deadline="1m0s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372964   21987 flags.go:33] FLAG: --image-service-endpoint=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372969   21987 flags.go:33] FLAG: --iptables-drop-bit="15"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372975   21987 flags.go:33] FLAG: --iptables-masquerade-bit="14"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372981   21987 flags.go:33] FLAG: --keep-terminated-pod-volumes="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372987   21987 flags.go:33] FLAG: --kube-api-burst="10"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372992   21987 flags.go:33] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372998   21987 flags.go:33] FLAG: --kube-api-qps="5"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373003   21987 flags.go:33] FLAG: --kube-reserved=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373009   21987 flags.go:33] FLAG: --kube-reserved-cgroup=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373014   21987 flags.go:33] FLAG: --kubeconfig="/etc/kubernetes/kubelet.conf"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373020   21987 flags.go:33] FLAG: --kubelet-cgroups=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373027   21987 flags.go:33] FLAG: --lock-file=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373032   21987 flags.go:33] FLAG: --log-backtrace-at=":0"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373039   21987 flags.go:33] FLAG: --log-cadvisor-usage="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373044   21987 flags.go:33] FLAG: --log-dir=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373050   21987 flags.go:33] FLAG: --log-file=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373055   21987 flags.go:33] FLAG: --log-file-max-size="1800"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373060   21987 flags.go:33] FLAG: --log-flush-frequency="5s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373066   21987 flags.go:33] FLAG: --logtostderr="true"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373071   21987 flags.go:33] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373078   21987 flags.go:33] FLAG: --make-iptables-util-chains="true"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373083   21987 flags.go:33] FLAG: --manifest-url=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373088   21987 flags.go:33] FLAG: --manifest-url-header=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373097   21987 flags.go:33] FLAG: --master-service-namespace="default"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373103   21987 flags.go:33] FLAG: --max-open-files="1000000"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373110   21987 flags.go:33] FLAG: --max-pods="110"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373116   21987 flags.go:33] FLAG: --maximum-dead-containers="-1"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373121   21987 flags.go:33] FLAG: --maximum-dead-containers-per-container="1"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373128   21987 flags.go:33] FLAG: --minimum-container-ttl-duration="0s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373134   21987 flags.go:33] FLAG: --minimum-image-ttl-duration="2m0s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373139   21987 flags.go:33] FLAG: --network-plugin="cni"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373144   21987 flags.go:33] FLAG: --network-plugin-mtu="0"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373149   21987 flags.go:33] FLAG: --node-ip=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373155   21987 flags.go:33] FLAG: --node-labels=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373163   21987 flags.go:33] FLAG: --node-status-max-images="50"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373168   21987 flags.go:33] FLAG: --node-status-update-frequency="10s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373183   21987 flags.go:33] FLAG: --non-masquerade-cidr="10.0.0.0/8"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373188   21987 flags.go:33] FLAG: --oom-score-adj="-999"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373194   21987 flags.go:33] FLAG: --pod-cidr=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373199   21987 flags.go:33] FLAG: --pod-infra-container-image="k8s.gcr.io/pause:3.2"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373205   21987 flags.go:33] FLAG: --pod-manifest-path=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373210   21987 flags.go:33] FLAG: --pod-max-pids="-1"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373215   21987 flags.go:33] FLAG: --pods-per-core="0"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373221   21987 flags.go:33] FLAG: --port="10250"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373227   21987 flags.go:33] FLAG: --protect-kernel-defaults="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373232   21987 flags.go:33] FLAG: --provider-id=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373237   21987 flags.go:33] FLAG: --qos-reserved=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373243   21987 flags.go:33] FLAG: --read-only-port="10255"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373249   21987 flags.go:33] FLAG: --really-crash-for-testing="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373256   21987 flags.go:33] FLAG: --redirect-container-streaming="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373261   21987 flags.go:33] FLAG: --register-node="true"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373266   21987 flags.go:33] FLAG: --register-schedulable="true"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373271   21987 flags.go:33] FLAG: --register-with-taints=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373279   21987 flags.go:33] FLAG: --registry-burst="10"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373285   21987 flags.go:33] FLAG: --registry-qps="5"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373290   21987 flags.go:33] FLAG: --reserved-cpus=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373295   21987 flags.go:33] FLAG: --resolv-conf="/etc/resolv.conf"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373300   21987 flags.go:33] FLAG: --root-dir="/var/lib/kubelet"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373306   21987 flags.go:33] FLAG: --rotate-certificates="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373312   21987 flags.go:33] FLAG: --rotate-server-certificates="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373319   21987 flags.go:33] FLAG: --runonce="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373325   21987 flags.go:33] FLAG: --runtime-cgroups=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373331   21987 flags.go:33] FLAG: --runtime-request-timeout="2m0s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373336   21987 flags.go:33] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373342   21987 flags.go:33] FLAG: --serialize-image-pulls="true"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373347   21987 flags.go:33] FLAG: --skip-headers="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373352   21987 flags.go:33] FLAG: --skip-log-headers="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373359   21987 flags.go:33] FLAG: --stderrthreshold="2"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373364   21987 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373369   21987 flags.go:33] FLAG: --storage-driver-db="cadvisor"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373375   21987 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373380   21987 flags.go:33] FLAG: --storage-driver-password="root"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373386   21987 flags.go:33] FLAG: --storage-driver-secure="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373392   21987 flags.go:33] FLAG: --storage-driver-table="stats"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373398   21987 flags.go:33] FLAG: --storage-driver-user="root"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373403   21987 flags.go:33] FLAG: --streaming-connection-idle-timeout="4h0m0s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373410   21987 flags.go:33] FLAG: --sync-frequency="1m0s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373415   21987 flags.go:33] FLAG: --system-cgroups=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373420   21987 flags.go:33] FLAG: --system-reserved=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373426   21987 flags.go:33] FLAG: --system-reserved-cgroup=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373431   21987 flags.go:33] FLAG: --tls-cert-file=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373436   21987 flags.go:33] FLAG: --tls-cipher-suites="[]"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373444   21987 flags.go:33] FLAG: --tls-min-version=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373449   21987 flags.go:33] FLAG: --tls-private-key-file=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373454   21987 flags.go:33] FLAG: --topology-manager-policy="none"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373459   21987 flags.go:33] FLAG: --v="5"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373465   21987 flags.go:33] FLAG: --version="false"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373474   21987 flags.go:33] FLAG: --vmodule=""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373480   21987 flags.go:33] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373487   21987 flags.go:33] FLAG: --volume-stats-agg-period="1m0s"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373542   21987 feature_gate.go:243] feature gates: &{map[]}
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.376643   21987 feature_gate.go:243] feature gates: &{map[]}
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.376722   21987 feature_gate.go:243] feature gates: &{map[]}
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.386413   21987 mount_linux.go:178] Detected OS with systemd
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.386524   21987 server.go:272] KubeletConfiguration: config.KubeletConfiguration{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, StaticPodPath:"/etc/kubernetes/manifests", SyncFrequency:v1.Duration{Duration:60000000000}, FileCheckFrequency:v1.Duration{Duration:20000000000}, HTTPCheckFrequency:v1.Duration{Duration:20000000000}, StaticPodURL:"", StaticPodURLHeader:map[string][]string(nil), Address:"0.0.0.0", Port:10250, ReadOnlyPort:0, TLSCertFile:"/var/lib/kubelet/pki/kubelet.crt", TLSPrivateKeyFile:"/var/lib/kubelet/pki/kubelet.key", TLSCipherSuites:[]string(nil), TLSMinVersion:"", RotateCertificates:true, ServerTLSBootstrap:false, Authentication:config.KubeletAuthentication{X509:config.KubeletX509Authentication{ClientCAFile:"/etc/kubernetes/pki/ca.crt"}, Webhook:config.KubeletWebhookAuthentication{Enabled:true, CacheTTL:v1.Duration{Duration:120000000000}}, Anonymous:config.KubeletAnonymousAuthentication{Enabled:false}}, Authorization:config.KubeletAuthorization{Mode:"Webhook", Webhook:config.KubeletWebhookAuthorization{CacheAuthorizedTTL:v1.Duration{Duration:300000000000}, CacheUnauthorizedTTL:v1.Duration{Duration:30000000000}}}, RegistryPullQPS:5, RegistryBurst:10, EventRecordQPS:5, EventBurst:10, EnableDebuggingHandlers:true, EnableContentionProfiling:false, HealthzPort:10248, HealthzBindAddress:"127.0.0.1", OOMScoreAdj:-999, ClusterDomain:"nflow.so", ClusterDNS:[]string{"10.10.0.10"}, StreamingConnectionIdleTimeout:v1.Duration{Duration:14400000000000}, NodeStatusUpdateFrequency:v1.Duration{Duration:10000000000}, NodeStatusReportFrequency:v1.Duration{Duration:300000000000}, NodeLeaseDurationSeconds:40, ImageMinimumGCAge:v1.Duration{Duration:120000000000}, ImageGCHighThresholdPercent:85, ImageGCLowThresholdPercent:80, VolumeStatsAggPeriod:v1.Duration{Duration:60000000000}, KubeletCgroups:"", SystemCgroups:"", CgroupRoot:"", CgroupsPerQOS:true, CgroupDriver:"systemd", CPUManagerPolicy:"none", CPUManagerReconcilePeriod:v1.Duration{Duration:10000000000}, TopologyManagerPolicy:"none", QOSReserved:map[string]string(nil), RuntimeRequestTimeout:v1.Duration{Duration:120000000000}, HairpinMode:"promiscuous-bridge", MaxPods:64, PodCIDR:"", PodPidsLimit:-1, ResolverConfig:"/etc/resolv.conf", CPUCFSQuota:true, CPUCFSQuotaPeriod:v1.Duration{Duration:100000000}, MaxOpenFiles:1000000, ContentType:"application/vnd.kubernetes.protobuf", KubeAPIQPS:5, KubeAPIBurst:10, SerializeImagePulls:true, EvictionHard:map[string]string{"imagefs.available":"15%", "memory.available":"100Mi", "nodefs.available":"10%", "nodefs.inodesFree":"5%"}, EvictionSoft:map[string]string(nil), EvictionSoftGracePeriod:map[string]string(nil), EvictionPressureTransitionPeriod:v1.Duration{Duration:240000000000}, EvictionMaxPodGracePeriod:0, EvictionMinimumReclaim:map[string]string(nil), PodsPerCore:0, EnableControllerAttachDetach:true, ProtectKernelDefaults:false, MakeIPTablesUtilChains:true, IPTablesMasqueradeBit:14, IPTablesDropBit:15, FeatureGates:map[string]bool(nil), FailSwapOn:true, ContainerLogMaxSize:"10Mi", ContainerLogMaxFiles:5, ConfigMapAndSecretChangeDetectionStrategy:"Watch", AllowedUnsafeSysctls:[]string(nil), SystemReserved:map[string]string{"cpu":"200m", "memory":"2000Mi"}, KubeReserved:map[string]string{"cpu":"200m", "memory":"500Mi"}, SystemReservedCgroup:"/system.slice", KubeReservedCgroup:"/kube.slice", EnforceNodeAllocatable:[]string{"pods", "kube-reserved"}, ReservedSystemCPUs:"", ShowHiddenMetricsForVersion:""}
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.386672   21987 server.go:417] Version: v1.18.5
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.391547   21987 feature_gate.go:243] feature gates: &{map[]}
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.391880   21987 feature_gate.go:243] feature gates: &{map[]}
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.392288   21987 plugins.go:100] No cloud provider specified.
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.392314   21987 server.go:537] No cloud provider specified: "" from the config file: ""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.392332   21987 server.go:838] Client rotation is on, will bootstrap in background
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.430340   21987 bootstrap.go:84] Current kubeconfig file contents are still valid, no bootstrap necessary
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.430451   21987 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.430889   21987 server.go:865] Starting client certificate rotation.
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.430909   21987 certificate_manager.go:282] Certificate rotation is enabled.
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.431619   21987 dynamic_cafile_content.go:129] Loaded a new CA Bundle and Verifier for "client-ca-bundle::/etc/kubernetes/pki/ca.crt"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.432074   21987 manager.go:146] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.432271   21987 plugin.go:40] CRI-O not connected: Get http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info: dial unix /var/run/crio/crio.sock: connect: no such file or directory
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.432432   21987 certificate_manager.go:553] Certificate expiration is 2022-08-15 06:58:46 +0000 UTC, rotation deadline is 2022-05-13 05:07:22.359802602 +0000 UTC
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.432453   21987 certificate_manager.go:288] Waiting 6498h12m16.927353019s for next certificate rotation
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.432505   21987 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.481957   21987 fs.go:125] Filesystem UUIDs: map[8770013a-4455-4a77-b023-04d04fa388c8:/dev/vda1]
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.481987   21987 fs.go:126] Filesystem partitions: map[/data/docker/containers/7e0790e192702a3f04ee841347556628862876a8c443061db6043b539ba0c2d7/mounts/shm:{mountpoint:/data/docker/containers/7e0790e192702a3f04ee841347556628862876a8c443061db6043b539ba0c2d7/mounts/shm major:0 minor:111 fsType:tmpfs blockSize:0} /data/docker/containers/bc5d2ea198a9e1c423ad5867290fc99db482d24d8729cd8a42ce69de5b2e3619/mounts/shm:{mountpoint:/data/docker/containers/bc5d2ea198a9e1c423ad5867290fc99db482d24d8729cd8a42ce69de5b2e3619/mounts/shm major:0 minor:65 fsType:tmpfs blockSize:0} /data/docker/containers/e26bef869e39e8cd04c94f62f585ef454d2293afc9a7ba398d562379a1cfac54/mounts/shm:{mountpoint:/data/docker/containers/e26bef869e39e8cd04c94f62f585ef454d2293afc9a7ba398d562379a1cfac54/mounts/shm major:0 minor:54 fsType:tmpfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:23 fsType:tmpfs blockSize:0} /dev/vda1:{mountpoint:/ major:253 minor:1 fsType:ext4 blockSize:0} /run:{mountpoint:/run major:0 minor:25 fsType:tmpfs blockSize:0} /run/user/501:{mountpoint:/run/user/501 major:0 minor:45 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:26 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~projected/hubble-tls:{mountpoint:/var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~projected/hubble-tls major:0 minor:50 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~secret/cilium-token-8lbcl:{mountpoint:/var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~secret/cilium-token-8lbcl major:0 minor:49 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~secret/clustermesh-secrets:{mountpoint:/var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~secret/clustermesh-secrets major:0 minor:48 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/75aa6245-eac7-46ee-9d13-7b521071074d/volumes/kubernetes.io~secret/kube-router-token-btqcq:{mountpoint:/var/lib/kubelet/pods/75aa6245-eac7-46ee-9d13-7b521071074d/volumes/kubernetes.io~secret/kube-router-token-btqcq major:0 minor:47 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d01c4c7-6c73-4bd0-8cf2-468b4a70d0f5/volumes/kubernetes.io~secret/csi-admin-token-r4x5b:{mountpoint:/var/lib/kubelet/pods/9d01c4c7-6c73-4bd0-8cf2-468b4a70d0f5/volumes/kubernetes.io~secret/csi-admin-token-r4x5b major:0 minor:108 fsType:tmpfs blockSize:0} overlay_0-109:{mountpoint:/data/docker/overlay2/2f44e5a0ca90fd122c19d0ab3ac6ac03d2e167427f5bd1e48438acf20547c3fb/merged major:0 minor:109 fsType:overlay blockSize:0} overlay_0-120:{mountpoint:/data/docker/overlay2/bd2e48aed6772c7cf9773d0d21db98089931d6e1f8fdcf45b9ae3a2e9c5d3a1b/merged major:0 minor:120 fsType:overlay blockSize:0} overlay_0-129:{mountpoint:/data/docker/overlay2/540c822eef37596e6b6872ccf20080018f3b4fe1d92e6d4a5be7badae61e9c2f/merged major:0 minor:129 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/data/docker/overlay2/4516e81acee49b86624c247cd4c59b64bebda52499bce9a8736ab6d4f04a17e4/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-63:{mountpoint:/data/docker/overlay2/ad70382b0bc0ded5ab59552be7675ebd13765c429c49db2d628723076f027de4/merged major:0 minor:63 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/data/docker/overlay2/4a6714d6f732dd7d4fdbf47578cbb8c04047631f41d97c3c1ad44389a0e35d8e/merged 
major:0 minor:85 fsType:overlay blockSize:0} overlay_0-91:{mountpoint:/data/docker/overlay2/5900995ba47e551caf1fbc873a7351601c56431d9a940c72c1175fdbc74815c8/merged major:0 minor:91 fsType:overlay blockSize:0}]
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.483815   21987 manager.go:193] Machine: {NumCores:2 CpuFrequency:2499988 MemoryCapacity:4126961664 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:20191225111607875619293640639763 SystemUUID:c886c8fa-a1e6-45b5-9c9a-ed72dd7ca192 BootID:88e6fa2c-7d81-4449-bd0c-2d06877e2746 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:23 Capacity:2063478784 Type:vfs Inodes:503779 HasInodes:true} {Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:26 Capacity:2063478784 Type:vfs Inodes:503779 HasInodes:true} {Device:/var/lib/kubelet/pods/75aa6245-eac7-46ee-9d13-7b521071074d/volumes/kubernetes.io~secret/kube-router-token-btqcq DeviceMajor:0 DeviceMinor:47 Capacity:2063478784 Type:vfs Inodes:503779 HasInodes:true} {Device:/var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~secret/cilium-token-8lbcl DeviceMajor:0 DeviceMinor:49 Capacity:2063478784 Type:vfs Inodes:503779 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:316933124096 Type:vfs Inodes:19660800 HasInodes:true} {Device:/data/docker/containers/7e0790e192702a3f04ee841347556628862876a8c443061db6043b539ba0c2d7/mounts/shm DeviceMajor:0 DeviceMinor:111 Capacity:67108864 Type:vfs Inodes:503779 HasInodes:true} {Device:/var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~secret/clustermesh-secrets DeviceMajor:0 DeviceMinor:48 Capacity:2063478784 Type:vfs Inodes:503779 HasInodes:true} {Device:/data/docker/containers/bc5d2ea198a9e1c423ad5867290fc99db482d24d8729cd8a42ce69de5b2e3619/mounts/shm DeviceMajor:0 DeviceMinor:65 Capacity:67108864 Type:vfs Inodes:503779 HasInodes:true} {Device:overlay_0-91 DeviceMajor:0 DeviceMinor:91 Capacity:316933124096 Type:vfs Inodes:19660800 HasInodes:true} {Device:overlay_0-129 DeviceMajor:0 DeviceMinor:129 Capacity:316933124096 Type:vfs Inodes:19660800 HasInodes:true} {Device:overlay_0-120 DeviceMajor:0 DeviceMinor:120 Capacity:316933124096 Type:vfs Inodes:19660800 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:25 Capacity:2063478784 Type:vfs Inodes:503779 HasInodes:true} {Device:/dev/vda1 DeviceMajor:253 DeviceMinor:1 Capacity:316933124096 Type:vfs Inodes:19660800 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:316933124096 Type:vfs Inodes:19660800 HasInodes:true} {Device:/data/docker/containers/e26bef869e39e8cd04c94f62f585ef454d2293afc9a7ba398d562379a1cfac54/mounts/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:503779 HasInodes:true} {Device:overlay_0-63 DeviceMajor:0 DeviceMinor:63 Capacity:316933124096 Type:vfs Inodes:19660800 HasInodes:true} {Device:/var/lib/kubelet/pods/9d01c4c7-6c73-4bd0-8cf2-468b4a70d0f5/volumes/kubernetes.io~secret/csi-admin-token-r4x5b DeviceMajor:0 DeviceMinor:108 Capacity:2063478784 Type:vfs Inodes:503779 HasInodes:true} {Device:/run/user/501 DeviceMajor:0 DeviceMinor:45 Capacity:412696576 Type:vfs Inodes:503779 HasInodes:true} {Device:/var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~projected/hubble-tls DeviceMajor:0 DeviceMinor:50 Capacity:2063478784 Type:vfs Inodes:503779 HasInodes:true} {Device:overlay_0-109 DeviceMajor:0 DeviceMinor:109 Capacity:316933124096 Type:vfs Inodes:19660800 HasInodes:true}] DiskMap:map[253:0:{Name:vda Major:253 Minor:0 Size:322122547200 Scheduler:mq-deadline}] NetworkDevices:[{Name:cilium_host MacAddress:ee:6b:ad:ca:17:06 Speed:10000 Mtu:1500} {Name:cilium_net MacAddress:76:b1:f7:1a:fd:b2 Speed:10000 
Mtu:1500} {Name:eth0 MacAddress:00:16:3e:02:92:2d Speed:-1 Mtu:1500} {Name:kube-bridge MacAddress:aa:31:02:a9:46:d6 Speed:-1 Mtu:1500} {Name:lxc_health MacAddress:aa:14:67:c3:fc:bc Speed:10000 Mtu:1500}] Topology:[{Id:0 Memory:4126961664 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:1048576 Type:Unified Level:2}]}] Caches:[{Size:34603008 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.501454   21987 manager.go:199] Version: {KernelVersion:5.11.1-1.el7.elrepo.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:19.03.12 DockerAPIVersion:1.40 CadvisorVersion: CadvisorRevision:}
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.501551   21987 server.go:471] Sending events to api server.
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.501602   21987 server.go:647] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502100   21987 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502116   21987 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName:/kube.slice SystemReservedCgroupName:/system.slice ReservedSystemCPUs: EnforceNodeAllocatable:map[kube-reserved:{} pods:{}] KubeReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] SystemReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:2097152000 scale:0} d:{Dec:<nil>} s: Format:BinarySI}] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502277   21987 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502288   21987 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502296   21987 container_manager_linux.go:306] Creating device plugin manager: true
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502305   21987 manager.go:133] Creating Device Plugin manager at /var/lib/kubelet/device-plugins/kubelet.sock
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502341   21987 oom_linux.go:65] attempting to set "/proc/self/oom_score_adj" to "-999"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502395   21987 client.go:75] Connecting to docker on unix:///var/run/docker.sock
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502410   21987 client.go:92] Start docker client with request timeout=2m0s
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: W0815 18:55:05.513729   21987 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.513754   21987 docker_service.go:238] Hairpin mode set to "hairpin-veth"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.540226   21987 cni.go:206] Using CNI configuration file /etc/cni/net.d/05-cilium.conf
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.568437   21987 cni.go:206] Using CNI configuration file /etc/cni/net.d/05-cilium.conf
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.568464   21987 plugins.go:166] Loaded network plugin "cni"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.568513   21987 docker_service.go:253] Docker cri networking managed by cni
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.587883   21987 docker_service.go:258] Docker Info: &{ID:EFHU:36UO:VNUN:HLL2:RFUN:56FC:YVPO:Y4J6:VMVW:X2FK:675K:LKGX Containers:8 ContainersRunning:7 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:64 SystemTime:2021-08-15T18:55:05.570729171+08:00 LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.11.1-1.el7.elrepo.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000773b20 NCPU:2 MemTotal:4126961664 GenericResources:[] DockerRootDir:/data/docker HTTPProxy: HTTPSProxy: NoProxy: Name:HK-K8S-WN4 Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[]}
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.587981   21987 docker_service.go:271] Setting cgroupDriver to systemd
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.588072   21987 kubelet.go:367] RemoteRuntimeEndpoint: "unix:///var/run/dockershim.sock", RemoteImageEndpoint: "unix:///var/run/dockershim.sock"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.588084   21987 kubelet.go:370] Starting the GRPC server for the docker CRI shim.
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.588112   21987 docker_server.go:59] Start dockershim grpc server
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.611906   21987 container_manager_linux.go:870] attempting to apply oom_score_adj of -999 to pid 1025
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.611931   21987 oom_linux.go:65] attempting to set "/proc/1025/oom_score_adj" to "-999"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.627934   21987 cni.go:206] Using CNI configuration file /etc/cni/net.d/05-cilium.conf
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628614   21987 remote_runtime.go:51] Connecting to runtime service unix:///var/run/dockershim.sock
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628672   21987 remote_runtime.go:59] parsed scheme: ""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628683   21987 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628719   21987 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628730   21987 clientconn.go:933] ClientConn switching balancer to "pick_first"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628762   21987 remote_image.go:41] Connecting to image service unix:///var/run/dockershim.sock
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628778   21987 remote_image.go:50] parsed scheme: ""
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628783   21987 remote_image.go:50] scheme "" not registered, fallback to default scheme
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628792   21987 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628797   21987 clientconn.go:933] ClientConn switching balancer to "pick_first"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628816   21987 server.go:1072] Using root directory: /var/lib/kubelet
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628829   21987 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628847   21987 file.go:68] Watching path "/etc/kubernetes/manifests"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628859   21987 kubelet.go:317] Watching apiserver
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.629142   21987 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000258060, {CONNECTING <nil>}
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.629325   21987 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0002581e0, {CONNECTING <nil>}
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.631755   21987 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000258060, {READY <nil>}
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.631794   21987 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0002581e0, {READY <nil>}
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.631885   21987 config.go:303] Setting pods for source file
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.634401   21987 reflector.go:175] Starting reflector *v1.Pod (0s) from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.634421   21987 reflector.go:211] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.634753   21987 reflector.go:175] Starting reflector *v1.Service (0s) from k8s.io/kubernetes/pkg/kubelet/kubelet.go:517
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.634766   21987 reflector.go:211] Listing and watching *v1.Service from k8s.io/kubernetes/pkg/kubelet/kubelet.go:517
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.634997   21987 reflector.go:175] Starting reflector *v1.Node (0s) from k8s.io/kubernetes/pkg/kubelet/kubelet.go:526
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.635017   21987 reflector.go:211] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:526
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.637246   21987 plugins.go:64] Registering credential provider: .dockercfg
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.643276   21987 config.go:303] Setting pods for source api
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.643323   21987 config.go:412] Receiving a new pod "kube-router-7zjsg_kube-system(75aa6245-eac7-46ee-9d13-7b521071074d)"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.643343   21987 config.go:412] Receiving a new pod "csi-plugin-nssx2_kube-system(9d01c4c7-6c73-4bd0-8cf2-468b4a70d0f5)"
    Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.643354   21987 config.go:412] Receiving a new pod "cilium-44mzg_kube-system(410f3b52-3fa2-4a75-8811-bd2b4e60b1bd)"
    Aug 15 18:55:10 HK-K8S-WN4 kubelet[21987]: I0815 18:55:10.655929   21987 cni.go:206] Using CNI configuration file /etc/cni/net.d/05-cilium.conf
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: E0815 18:55:11.773874   21987 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.773930   21987 azure_credentials.go:158] Azure config unspecified, disabling
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784323   21987 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.12, apiVersion: 1.40.0
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784559   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/aws-ebs"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784576   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/gce-pd"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784601   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/cinder"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784612   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/azure-disk"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784621   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/azure-file"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784631   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/vsphere-volume"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784648   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/empty-dir"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784657   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/git-repo"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784670   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/host-path"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784680   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/nfs"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784690   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/secret"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784699   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/iscsi"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784712   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/glusterfs"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784728   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/rbd"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784738   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/quobyte"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784750   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/cephfs"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784762   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/downward-api"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784772   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/fc"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784781   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/flocker"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784793   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/configmap"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784817   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/projected"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784844   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/portworx-volume"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784860   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/scaleio"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784871   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/local-volume"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784881   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/storageos"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784916   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/csi"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.785047   21987 server.go:1126] Started kubelet
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.785066   21987 healthz.go:120] No default health checks specified. Installing the ping handler.
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.785076   21987 healthz.go:124] Installing health checkers for (/healthz): "ping"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: E0815 18:55:11.785218   21987 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.786490   21987 config.go:100] Looking for [api file], have seen map[]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.787140   21987 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"hk-k8s-wn4", UID:"hk-k8s-wn4", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet.
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.787451   21987 server.go:145] Starting to listen on 0.0.0.0:10250
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.787525   21987 healthz.go:124] Installing health checkers for (/healthz): "ping","log","syncloop"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.790564   21987 server.go:393] Adding debug handlers to kubelet server.
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.796426   21987 hostutil_linux.go:209] Directory /var/lib/kubelet is already on a shared mount
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.796527   21987 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.796551   21987 iptables.go:442] running iptables: iptables [-w -N KUBE-MARK-DROP -t nat]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.797458   21987 csi_plugin.go:280] Initializing migrated drivers on CSINode
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.806105   21987 volume_manager.go:263] The desired_state_of_world populator starts
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.806122   21987 volume_manager.go:265] Starting Kubelet Volume Manager
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.806408   21987 reflector.go:175] Starting reflector *v1.CSIDriver (0s) from k8s.io/client-go/informers/factory.go:135
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.806421   21987 reflector.go:211] Listing and watching *v1.CSIDriver from k8s.io/client-go/informers/factory.go:135
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.806647   21987 desired_state_of_world_populator.go:139] Desired state populator starts to run
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.814855   21987 iptables.go:442] running iptables: iptables [-w -C KUBE-MARK-DROP -t nat -j MARK --set-xmark 0x00008000/0x00008000]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.816187   21987 iptables.go:442] running iptables: iptables [-w -N KUBE-FIREWALL -t filter]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.817438   21987 kubelet.go:1287] Container garbage collection succeeded
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822136   21987 image_gc_manager.go:231] Pod kube-system/csi-plugin-nssx2, container csi-plugin uses image registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin@sha256:37aa7701b108f291acac92b554b1cf53eacc9f1302440e0ec49ec0c77535106e(sha256:03c3f08975c9bd6b7614eea5ffb475a3408cb3ae30b72f0a1cf560b8a70c17d4)
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822152   21987 image_gc_manager.go:231] Pod kube-system/csi-plugin-nssx2, container disk-driver-registrar uses image registry.cn-hangzhou.aliyuncs.com/acs/csi-node-driver-registrar@sha256:273175c272162d480d06849e09e6e3cdb0245239e3a82df6630df3bc059c6571(sha256:c2103589e99f907333422ae78702360ad258a8f0366c20e341c9e0c53743e78a)
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822164   21987 image_gc_manager.go:231] Pod kube-system/cilium-44mzg, container cilium-agent uses image sha256:aaf366dbd941c0565213f676e90715df61f8ecc14ed2b87a3ac23d57a8e28b67(sha256:aaf366dbd941c0565213f676e90715df61f8ecc14ed2b87a3ac23d57a8e28b67)
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822191   21987 image_gc_manager.go:231] Pod kube-system/cilium-44mzg, container clean-cilium-state uses image quay.io/cilium/cilium@sha256:97daafddef3b6180b7dbfa7f45e07c673ee50441dc271b75779a689be22b3882(sha256:aaf366dbd941c0565213f676e90715df61f8ecc14ed2b87a3ac23d57a8e28b67)
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822198   21987 image_gc_manager.go:231] Pod kube-system/kube-router-7zjsg, container kube-router uses image cloudnativelabs/kube-router@sha256:31a87823700700c6ca3271fc72b413c682f890cb1e21b223fc2fabfdcf636f2f(sha256:32e7524455b959260146ba0b1d515bf4fc5f71413ea7154239cd3b164cf377f8)
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822208   21987 image_gc_manager.go:242] Adding image ID sha256:32e7524455b959260146ba0b1d515bf4fc5f71413ea7154239cd3b164cf377f8 to currentImages
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822219   21987 image_gc_manager.go:247] Image ID sha256:32e7524455b959260146ba0b1d515bf4fc5f71413ea7154239cd3b164cf377f8 is new
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822229   21987 image_gc_manager.go:255] Setting Image ID sha256:32e7524455b959260146ba0b1d515bf4fc5f71413ea7154239cd3b164cf377f8 lastUsed to 2021-08-15 18:55:11.822203539 +0800 CST m=+6.498045011
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822245   21987 image_gc_manager.go:259] Image ID sha256:32e7524455b959260146ba0b1d515bf4fc5f71413ea7154239cd3b164cf377f8 has size 97879110
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822252   21987 image_gc_manager.go:242] Adding image ID sha256:6d6859d1a42a2395a8eacc41c718a039210b377f922d19076ebbdd74aa047e89 to currentImages
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822260   21987 image_gc_manager.go:247] Image ID sha256:6d6859d1a42a2395a8eacc41c718a039210b377f922d19076ebbdd74aa047e89 is new
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822268   21987 image_gc_manager.go:259] Image ID sha256:6d6859d1a42a2395a8eacc41c718a039210b377f922d19076ebbdd74aa047e89 has size 169410912
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822275   21987 image_gc_manager.go:242] Adding image ID sha256:c19ae228f0699185488b6a1c0debb9c6b79672181356ad455c9a7924a41a01bb to currentImages
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822284   21987 image_gc_manager.go:247] Image ID sha256:c19ae228f0699185488b6a1c0debb9c6b79672181356ad455c9a7924a41a01bb is new
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822291   21987 image_gc_manager.go:259] Image ID sha256:c19ae228f0699185488b6a1c0debb9c6b79672181356ad455c9a7924a41a01bb has size 25967786
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822298   21987 image_gc_manager.go:242] Adding image ID sha256:aaf366dbd941c0565213f676e90715df61f8ecc14ed2b87a3ac23d57a8e28b67 to currentImages
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822307   21987 image_gc_manager.go:247] Image ID sha256:aaf366dbd941c0565213f676e90715df61f8ecc14ed2b87a3ac23d57a8e28b67 is new
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822317   21987 image_gc_manager.go:255] Setting Image ID sha256:aaf366dbd941c0565213f676e90715df61f8ecc14ed2b87a3ac23d57a8e28b67 lastUsed to 2021-08-15 18:55:11.822203539 +0800 CST m=+6.498045011
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822327   21987 image_gc_manager.go:259] Image ID sha256:aaf366dbd941c0565213f676e90715df61f8ecc14ed2b87a3ac23d57a8e28b67 has size 433944370
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822333   21987 image_gc_manager.go:242] Adding image ID sha256:d771cc9785a13659c0e9363cae1d238bb58114a3340f805564daaed2494475f8 to currentImages
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822341   21987 image_gc_manager.go:247] Image ID sha256:d771cc9785a13659c0e9363cae1d238bb58114a3340f805564daaed2494475f8 is new
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822349   21987 image_gc_manager.go:259] Image ID sha256:d771cc9785a13659c0e9363cae1d238bb58114a3340f805564daaed2494475f8 has size 9988981
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822356   21987 image_gc_manager.go:242] Adding image ID sha256:03c3f08975c9bd6b7614eea5ffb475a3408cb3ae30b72f0a1cf560b8a70c17d4 to currentImages
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822365   21987 image_gc_manager.go:247] Image ID sha256:03c3f08975c9bd6b7614eea5ffb475a3408cb3ae30b72f0a1cf560b8a70c17d4 is new
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822374   21987 image_gc_manager.go:255] Setting Image ID sha256:03c3f08975c9bd6b7614eea5ffb475a3408cb3ae30b72f0a1cf560b8a70c17d4 lastUsed to 2021-08-15 18:55:11.822203539 +0800 CST m=+6.498045011
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822383   21987 image_gc_manager.go:259] Image ID sha256:03c3f08975c9bd6b7614eea5ffb475a3408cb3ae30b72f0a1cf560b8a70c17d4 has size 440410596
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822391   21987 image_gc_manager.go:242] Adding image ID sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c to currentImages
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822400   21987 image_gc_manager.go:247] Image ID sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c is new
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822409   21987 image_gc_manager.go:255] Setting Image ID sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c lastUsed to 2021-08-15 18:55:11.822203539 +0800 CST m=+6.498045011
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822418   21987 image_gc_manager.go:259] Image ID sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c has size 682696
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822425   21987 image_gc_manager.go:242] Adding image ID sha256:c2103589e99f907333422ae78702360ad258a8f0366c20e341c9e0c53743e78a to currentImages
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822433   21987 image_gc_manager.go:247] Image ID sha256:c2103589e99f907333422ae78702360ad258a8f0366c20e341c9e0c53743e78a is new
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822441   21987 image_gc_manager.go:255] Setting Image ID sha256:c2103589e99f907333422ae78702360ad258a8f0366c20e341c9e0c53743e78a lastUsed to 2021-08-15 18:55:11.822203539 +0800 CST m=+6.498045011
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822451   21987 image_gc_manager.go:259] Image ID sha256:c2103589e99f907333422ae78702360ad258a8f0366c20e341c9e0c53743e78a has size 17057647
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.824391   21987 iptables.go:442] running iptables: iptables [-w -C KUBE-FIREWALL -t filter -m comment --comment kubernetes firewall for dropping marked packets -m mark --mark 0x00008000/0x00008000 -j DROP]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.825707   21987 iptables.go:442] running iptables: iptables [-w -C KUBE-FIREWALL -t filter -m comment --comment block incoming localnet connections --dst 127.0.0.0/8 ! --src 127.0.0.0/8 -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.826971   21987 iptables.go:442] running iptables: iptables [-w -C OUTPUT -t filter -j KUBE-FIREWALL]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.828038   21987 iptables.go:442] running iptables: iptables [-w -C INPUT -t filter -j KUBE-FIREWALL]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.830020   21987 kubelet.go:2184] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.830132   21987 clientconn.go:106] parsed scheme: "unix"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.830145   21987 clientconn.go:106] scheme "unix" not registered, fallback to default scheme
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.830236   21987 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.830249   21987 clientconn.go:933] ClientConn switching balancer to "pick_first"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.830412   21987 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00021c3e0, {CONNECTING <nil>}
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.830914   21987 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00021c3e0, {READY <nil>}
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.833893   21987 factory.go:137] Registering containerd factory
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.834021   21987 factory.go:122] Registration of the crio container factory failed: Get http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info: dial unix /var/run/crio/crio.sock: connect: no such file or directory
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.834957   21987 iptables.go:442] running iptables: iptables [-w -N KUBE-MARK-MASQ -t nat]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.838765   21987 iptables.go:442] running iptables: iptables [-w -N KUBE-POSTROUTING -t nat]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.840624   21987 iptables.go:442] running iptables: iptables [-w -C KUBE-MARK-MASQ -t nat -j MARK --set-xmark 0x00004000/0x00004000]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.842706   21987 iptables.go:442] running iptables: iptables [-w -C POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.844552   21987 kubelet_network_linux.go:136] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.844573   21987 iptables.go:442] running iptables: iptables [-w -C KUBE-POSTROUTING -t nat -m comment --comment kubernetes service traffic requiring SNAT -m mark --mark 0x00004000/0x00004000 -j MASQUERADE]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.846705   21987 status_manager.go:158] Starting to sync pod status with apiserver
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.846733   21987 kubelet.go:1821] Starting kubelet main sync loop.
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: E0815 18:55:11.846780   21987 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.846806   21987 generic.go:191] GenericPLEG: Relisting
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.846891   21987 iptables.go:442] running iptables: iptables [-w -N KUBE-KUBELET-CANARY -t mangle]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.847396   21987 reflector.go:175] Starting reflector *v1beta1.RuntimeClass (0s) from k8s.io/client-go/informers/factory.go:135
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.847412   21987 reflector.go:211] Listing and watching *v1beta1.RuntimeClass from k8s.io/client-go/informers/factory.go:135
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.849229   21987 iptables.go:442] running iptables: iptables [-w -N KUBE-KUBELET-CANARY -t nat]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.852559   21987 iptables.go:442] running iptables: iptables [-w -N KUBE-KUBELET-CANARY -t filter]
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.854368   21987 generic.go:155] GenericPLEG: 410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/edede45c07948ccf6481c412f4baa01c2bfa35e2a3112f3f1c3e4fff6bce02e4: non-existent -> running
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.854387   21987 generic.go:155] GenericPLEG: 410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/96eea5f3e430c09edf3eb45335a76ddd8bae3ecb40ca3fa9ecc68079d49d37d5: non-existent -> exited
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.854395   21987 generic.go:155] GenericPLEG: 410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/bc5d2ea198a9e1c423ad5867290fc99db482d24d8729cd8a42ce69de5b2e3619: non-existent -> running
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.854403   21987 generic.go:155] GenericPLEG: 75aa6245-eac7-46ee-9d13-7b521071074d/6a6ffc658ca662a5687233778315e696ffa2feda94cc5cca5c9b5d7c44396fed: non-existent -> running
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.854411   21987 generic.go:155] GenericPLEG: 75aa6245-eac7-46ee-9d13-7b521071074d/e26bef869e39e8cd04c94f62f585ef454d2293afc9a7ba398d562379a1cfac54: non-existent -> running
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.854420   21987 generic.go:155] GenericPLEG: 9d01c4c7-6c73-4bd0-8cf2-468b4a70d0f5/2050ed38bda21ca20bc0b43a1e4bced9b91ff1822f25ac50faf276628d641aa3: non-existent -> running
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.854427   21987 generic.go:155] GenericPLEG: 9d01c4c7-6c73-4bd0-8cf2-468b4a70d0f5/4177ed13c65dd6d4aaa1199c071d336449095d197c827719b18b5523129b9829: non-existent -> running
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.854434   21987 generic.go:155] GenericPLEG: 9d01c4c7-6c73-4bd0-8cf2-468b4a70d0f5/7e0790e192702a3f04ee841347556628862876a8c443061db6043b539ba0c2d7: non-existent -> running
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.857642   21987 kuberuntime_manager.go:930] getSandboxIDByPodUID got sandbox IDs ["bc5d2ea198a9e1c423ad5867290fc99db482d24d8729cd8a42ce69de5b2e3619"] for pod "cilium-44mzg_kube-system(410f3b52-3fa2-4a75-8811-bd2b4e60b1bd)"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.862818   21987 factory.go:356] Registering Docker factory
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.862838   21987 factory.go:54] Registering systemd factory
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.863114   21987 factory.go:101] Registering Raw factory
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.863403   21987 manager.go:1158] Started watching for new ooms in manager
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.865549   21987 nvidia.go:53] No NVIDIA devices found.
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.865566   21987 factory.go:177] Factory "containerd" was unable to handle container "/"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.865600   21987 factory.go:177] Factory "docker" was unable to handle container "/"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.865612   21987 factory.go:166] Error trying to work out if we can handle /: / not handled by systemd handler
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.865618   21987 factory.go:177] Factory "systemd" was unable to handle container "/"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.865629   21987 factory.go:173] Using factory "raw" for container "/"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.865922   21987 manager.go:950] Added container: "/" (aliases: [], namespace: "")
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.866152   21987 handler.go:325] Added event &{/ 2021-08-15 18:48:02.761929273 +0800 CST containerCreation {<nil>}}
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.866199   21987 manager.go:272] Starting recovery of all containers
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.871132   21987 container.go:467] Start housekeeping for container "/"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.875459   21987 generic.go:386] PLEG: Write status for cilium-44mzg/kube-system: &container.PodStatus{ID:"410f3b52-3fa2-4a75-8811-bd2b4e60b1bd", Name:"cilium-44mzg", Namespace:"kube-system", IPs:[]string{}, ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc0005828c0), (*container.ContainerStatus)(0xc0005829a0)}, SandboxStatuses:[]*v1alpha2.PodSandboxStatus{(*v1alpha2.PodSandboxStatus)(0xc00018ec00)}} (err: <nil>)
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.876396   21987 kuberuntime_manager.go:930] getSandboxIDByPodUID got sandbox IDs ["e26bef869e39e8cd04c94f62f585ef454d2293afc9a7ba398d562379a1cfac54"] for pod "kube-router-7zjsg_kube-system(75aa6245-eac7-46ee-9d13-7b521071074d)"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878700   21987 factory.go:177] Factory "containerd" was unable to handle container "/system.slice"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878716   21987 factory.go:177] Factory "docker" was unable to handle container "/system.slice"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878728   21987 factory.go:166] Error trying to work out if we can handle /system.slice: /system.slice not handled by systemd handler
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878735   21987 factory.go:177] Factory "systemd" was unable to handle container "/system.slice"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878745   21987 factory.go:170] Factory "raw" can handle container "/system.slice", but ignoring.
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878757   21987 manager.go:908] ignoring container "/system.slice"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878763   21987 factory.go:177] Factory "containerd" was unable to handle container "/system.slice/chronyd.service"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878770   21987 factory.go:177] Factory "docker" was unable to handle container "/system.slice/chronyd.service"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878778   21987 factory.go:166] Error trying to work out if we can handle /system.slice/chronyd.service: /system.slice/chronyd.service not handled by systemd handler
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878784   21987 factory.go:177] Factory "systemd" was unable to handle container "/system.slice/chronyd.service"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878793   21987 factory.go:170] Factory "raw" can handle container "/system.slice/chronyd.service", but ignoring.
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878803   21987 manager.go:908] ignoring container "/system.slice/chronyd.service"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878810   21987 factory.go:177] Factory "containerd" was unable to handle container "/aegis"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878816   21987 factory.go:177] Factory "docker" was unable to handle container "/aegis"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878825   21987 factory.go:166] Error trying to work out if we can handle /aegis: /aegis not handled by systemd handler
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878830   21987 factory.go:177] Factory "systemd" was unable to handle container "/aegis"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878838   21987 factory.go:170] Factory "raw" can handle container "/aegis", but ignoring.
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878847   21987 manager.go:908] ignoring container "/aegis"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878853   21987 factory.go:177] Factory "containerd" was unable to handle container "/system.slice/run-user-501.mount"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878860   21987 factory.go:177] Factory "docker" was unable to handle container "/system.slice/run-user-501.mount"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878867   21987 factory.go:170] Factory "systemd" can handle container "/system.slice/run-user-501.mount", but ignoring.
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878878   21987 manager.go:908] ignoring container "/system.slice/run-user-501.mount"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.884381   21987 factory.go:166] Error trying to work out if we can handle /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75aa6245_eac7_46ee_9d13_7b521071074d.slice/docker-6a6ffc658ca662a5687233778315e696ffa2feda94cc5cca5c9b5d7c44396fed.scope: failed to load container: container "6a6ffc658ca662a5687233778315e696ffa2feda94cc5cca5c9b5d7c44396fed" in namespace "k8s.io": not found
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.884396   21987 factory.go:177] Factory "containerd" was unable to handle container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75aa6245_eac7_46ee_9d13_7b521071074d.slice/docker-6a6ffc658ca662a5687233778315e696ffa2feda94cc5cca5c9b5d7c44396fed.scope"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.884650   21987 generic.go:386] PLEG: Write status for kube-router-7zjsg/kube-system: &container.PodStatus{ID:"75aa6245-eac7-46ee-9d13-7b521071074d", Name:"kube-router-7zjsg", Namespace:"kube-system", IPs:[]string{}, ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc000582c40)}, SandboxStatuses:[]*v1alpha2.PodSandboxStatus{(*v1alpha2.PodSandboxStatus)(0xc000f4e000)}} (err: <nil>)
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.886339   21987 factory.go:173] Using factory "docker" for container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75aa6245_eac7_46ee_9d13_7b521071074d.slice/docker-6a6ffc658ca662a5687233778315e696ffa2feda94cc5cca5c9b5d7c44396fed.scope"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.886714   21987 kuberuntime_manager.go:930] getSandboxIDByPodUID got sandbox IDs ["7e0790e192702a3f04ee841347556628862876a8c443061db6043b539ba0c2d7"] for pod "csi-plugin-nssx2_kube-system(9d01c4c7-6c73-4bd0-8cf2-468b4a70d0f5)"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.888434   21987 manager.go:950] Added container: "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75aa6245_eac7_46ee_9d13_7b521071074d.slice/docker-6a6ffc658ca662a5687233778315e696ffa2feda94cc5cca5c9b5d7c44396fed.scope" (aliases: [k8s_kube-router_kube-router-7zjsg_kube-system_75aa6245-eac7-46ee-9d13-7b521071074d_0 6a6ffc658ca662a5687233778315e696ffa2feda94cc5cca5c9b5d7c44396fed], namespace: "docker")
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889039   21987 handler.go:325] Added event &{/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75aa6245_eac7_46ee_9d13_7b521071074d.slice/docker-6a6ffc658ca662a5687233778315e696ffa2feda94cc5cca5c9b5d7c44396fed.scope 2021-08-15 07:04:08.260947258 +0000 UTC containerCreation {<nil>}}
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889071   21987 factory.go:177] Factory "containerd" was unable to handle container "/kubepods.slice/kubepods-besteffort.slice"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889081   21987 factory.go:177] Factory "docker" was unable to handle container "/kubepods.slice/kubepods-besteffort.slice"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889092   21987 factory.go:166] Error trying to work out if we can handle /kubepods.slice/kubepods-besteffort.slice: /kubepods.slice/kubepods-besteffort.slice not handled by systemd handler
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889098   21987 factory.go:177] Factory "systemd" was unable to handle container "/kubepods.slice/kubepods-besteffort.slice"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889107   21987 factory.go:173] Using factory "raw" for container "/kubepods.slice/kubepods-besteffort.slice"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889386   21987 manager.go:950] Added container: "/kubepods.slice/kubepods-besteffort.slice" (aliases: [], namespace: "")
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889646   21987 handler.go:325] Added event &{/kubepods.slice/kubepods-besteffort.slice 2021-08-15 18:47:53.833242529 +0800 CST containerCreation {<nil>}}
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889670   21987 factory.go:177] Factory "containerd" was unable to handle container "/assist"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889678   21987 factory.go:177] Factory "docker" was unable to handle container "/assist"
    Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889688   21987 factory.go:166] Error trying to work out if we can handle /assist: /assist not handled by systemd handler
    Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.107535   21987 config.go:100] Looking for [api file], have seen map[]
    Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.108392   21987 cpu_manager.go:184] [cpumanager] starting with none policy
    Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.108402   21987 cpu_manager.go:185] [cpumanager] reconciling every 10s
    Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.108418   21987 state_mem.go:36] [cpumanager] initializing new in-memory state store
    Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.108640   21987 state_mem.go:88] [cpumanager] updated default cpuset: ""
    Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.108652   21987 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
    Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.108664   21987 state_checkpoint.go:136] [cpumanager] state checkpoint: restored state from checkpoint
    Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.108671   21987 state_checkpoint.go:137] [cpumanager] state checkpoint: defaultCPUSet:
    Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.108679   21987 policy_none.go:43] [cpumanager] none policy: Start
    Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.109957   21987 node_container_manager_linux.go:75] Attempting to enforce Node Allocatable with config: {KubeReservedCgroupName:/kube.slice SystemReservedCgroupName:/system.slice ReservedSystemCPUs: EnforceNodeAllocatable:map[kube-reserved:{} pods:{}] KubeReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] SystemReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:2097152000 scale:0} d:{Dec:<nil>} s: Format:BinarySI}] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]}
    Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.110066   21987 node_container_manager_linux.go:121] Enforcing kube reserved on cgroup "/kube.slice" with limits: map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]
    Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.110090   21987 node_container_manager_linux.go:141] Enforcing limits on cgroup ["kube"] with 824649664312 cpu shares, 824649664304 bytes of memory, and 0 processes
    Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.110149   21987 cgroup_manager_linux.go:276] The Cgroup [kube] has some missing paths: [/sys/fs/cgroup/cpu,cpuacct/kube.slice /sys/fs/cgroup/systemd/kube.slice /sys/fs/cgroup/pids/kube.slice /sys/fs/cgroup/hugetlb/kube.slice /sys/fs/cgroup/cpu,cpuacct/kube.slice]
    Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: F0815 18:55:12.110207   21987 kubelet.go:1383] Failed to start ContainerManager Failed to enforce Kube Reserved Cgroup Limits on "/kube.slice": ["kube"] cgroup does not exist

    Based on the error message above, manually create the missing cgroups:

    cgroup_manager_linux.go:276] The Cgroup [kube] has some missing paths: [/sys/fs/cgroup/cpu,cpuacct/kube.slice /sys/fs/cgroup/systemd/kube.slice /sys/fs/cgroup/pids/kube.slice /sys/fs/cgroup/hugetlb/kube.slice /sys/fs/cgroup/cpu,cpuacct/kube.slice]

    mkdir /sys/fs/cgroup/systemd/kube.slice
    mkdir /sys/fs/cgroup/pids/kube.slice
    mkdir /sys/fs/cgroup/hugetlb/kube.slice
    mkdir /sys/fs/cgroup/cpu,cpuacct/kube.slice
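
    The same fix can be scripted so it is safe to re-run. This is only a sketch, limited to the controllers named in the error message (mkdir -p does nothing if a directory already exists):

    for ctrl in systemd pids hugetlb cpu,cpuacct; do
        mkdir -p "/sys/fs/cgroup/${ctrl}/kube.slice"
    done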

    Start the kubelet again.
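
    For example, with the kubelet managed by systemd as in the unit file shown later in this post:

    systemctl restart kubelet
    # follow the startup logs and confirm the cgroup error is gone
    journalctl -u kubelet -f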

    Verify the kubeReserved resource limits:

    # Memory limit
    <root@HK-K8S-WN4 /sys/fs/cgroup/memory/kube.slice># cat memory.limit_in_bytes
    524288000
    # CPU limits
    <root@HK-K8S-WN4 /sys/fs/cgroup/cpu/kube.slice># cat cpu.shares
    204
    <root@HK-K8S-WN4 /sys/fs/cgroup/cpu/kube.slice># cat cpuacct.stat
    user 0
    system 0
    <root@HK-K8S-WN4 /sys/fs/cgroup/cpu/kube.slice># cat cgroup.procs
    <root@HK-K8S-WN4 /sys/fs/cgroup/cpu/kube.slice># cat cpu.cfs_period_us
    100000
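
    These values match the kubeReserved settings: memory.limit_in_bytes 524288000 = 500 × 1024 × 1024 bytes, i.e. 500Mi, and cpu.shares 204 comes from the kubelet's millicore-to-shares conversion, 200m × 1024 / 1000 = 204.8, truncated to 204.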

    Reference configuration

    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 0s
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 0s
        cacheUnauthorizedTTL: 0s
    clusterDNS:
    - 10.10.0.10
    clusterDomain: nflow.so
    enforceNodeAllocatable:
      - pods
      - kube-reserved
      - system-reserved
    systemReserved:
      cpu: 200m
      memory: 2000Mi
    kubeReserved:
      cpu: 200m
      memory: 500Mi
    kubeReservedCgroup: /kube.slice
    systemReservedCgroup: /system.slice
    evictionHard:
      memory.available: "500Mi"
      imagefs.available": "15%"
      nodefs.available": "10%"
      nodefs.inodesFree": "5%"
    evictionMinimumReclaim:
      memory.available: "300Mi"
      nodefs.available: "500Mi"
      imagefs.available: "2Gi"
    cgroupDriver: systemd
    maxPods: 64
    cpuManagerReconcilePeriod: 0s
    evictionPressureTransitionPeriod: 4m0s
    fileCheckFrequency: 0s
    healthzBindAddress: 127.0.0.1
    healthzPort: 10248
    httpCheckFrequency: 0s
    imageMinimumGCAge: 0s
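
    As a rough cross-check (numbers taken from the node in the logs above, MemTotal 4126961664 bytes ≈ 3936Mi): memory Allocatable ≈ 3936Mi - 500Mi (kubeReserved) - 2000Mi (systemReserved) - 500Mi (evictionHard memory.available) ≈ 936Mi. The value the scheduler actually sees can be read back from the API, for example:

    kubectl describe node hk-k8s-wn4 | grep -A 6 Allocatable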

    kubelet.service (note: when the node is rebooted, the cgroup directories created above are cleared; the following unit file can be used to work around this)

    [Unit]
    Description=kubelet: The Kubernetes Node Agent
    Documentation=https://kubernetes.io/docs/
    Wants=network-online.target
    After=network-online.target
    
    [Service]
    ExecStartPre=-/bin/mkdir /sys/fs/cgroup/cpuset/system.slice
    ExecStartPre=-/bin/mkdir /sys/fs/cgroup/hugetlb/system.slice
    ExecStartPre=-/bin/mkdir /sys/fs/cgroup/systemd/kube.slice
    ExecStartPre=-/bin/mkdir /sys/fs/cgroup/pids/kube.slice
    ExecStartPre=-/bin/mkdir /sys/fs/cgroup/hugetlb/kube.slice
    ExecStartPre=-/bin/mkdir /sys/fs/cgroup/cpu,cpuacct/kube.slice
    ExecStartPre=-/bin/mkdir /sys/fs/cgroup/cpuset/kube.slice
    ExecStartPre=-/bin/mkdir /sys/fs/cgroup/memory/kube.slice
    
    ExecStart=/usr/bin/kubelet
    Restart=always
    StartLimitInterval=0
    RestartSec=10
    
    [Install]
    WantedBy=multi-user.target
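
    After changing the unit file, reload systemd and restart the kubelet so the ExecStartPre steps run again:

    systemctl daemon-reload
    systemctl restart kubelet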

    Recommended cgroup configuration

    The recommended cgroup layout for a Kubernetes node is as follows.

    • Place all OS daemons under the top-level SystemReserved cgroup

      /sys/fs/cgroup/memory/system.slice

    • Kubelet & Container Runtime统一放在KubeReserved的顶级cgroup下(如果配置kubereserved策略时,kubereservedcgroup需要在kubelet的config.yaml配置文件中指定,同时需要在节点手动创建)为什么将Container Runtime放在KubeReserved的cgroup下,官方的理由如下2点
    1. Kubernetes 节点上的Container Runtime肯定是受Kubelet来管控的(换句话意思就是,既然使用Kubernetes平台作为Container Runtime管理,那么就应该受kubelet来管控);这里还有一种情况就是手动创建的容器,所以肯定的是不受kubelet管控,那理应不属于KuebeReserved的cgroup来做资源限制
    2. KubeRserved的cgroup中的资源消耗一定是与节点上运行Pod的数量息息相关的,简单理解来说就是多则所以耗资源
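
    When the systemd cgroup driver is in use, one way to actually move the kubelet and the container runtime under the kube-reserved cgroup is a systemd drop-in that assigns their service units to that slice. This is only a sketch, assuming both run as systemd services and kubeReservedCgroup is /kube.slice as in the reference configuration above; the drop-in path is illustrative:

    # /etc/systemd/system/docker.service.d/10-kube-slice.conf (same idea for kubelet.service)
    [Service]
    Slice=kube.slice

    Run systemctl daemon-reload and restart the affected services afterwards so they are moved into kube.slice.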

    The cgroup hierarchy below uses dedicated cgroups for the kubelet and the runtime so that their resource usage can be tracked independently.

    / (Cgroup Root)
    .
    +..systemreserved or system.slice (Specified via `--system-reserved-cgroup`; `SystemReserved` enforced here *optionally* by kubelet)
    .       .tasks(sshd, udev, etc)
    .
    +..podruntime or podruntime.slice (Specified via `--kube-reserved-cgroup`; `KubeReserved` enforced here *optionally* by kubelet)
    .       .
    .       +..kubelet
    .       .       .tasks(kubelet)
    .       .
    .       +..runtime
    .               .tasks(docker-engine, containerd)
    .
    +..kubepods or kubepods.slice (Node Allocatable enforced here by Kubelet)
            .
            +..PodGuaranteed
            .       .
            .       +..Container1
            .       .       .tasks(container processes)
            .       .
            .       +..PodOverhead
            .       .       .tasks(per-pod processes)
            .       ...
            .
            +..Burstable
            .       .
            .       +..PodBurstable
            .       .       .
            .       .       +..Container1
            .       .       .       .tasks(container processes)
            .       .       +..Container2
            .       .       .       .tasks(container processes)
            .       .       ...
            .       ...
            .
            +..Besteffort
                    .
                    +..PodBesteffort
                            .
                            +..Container1
                            .       .tasks(container processes)
                            +..Container2
                            .       .tasks(container processes)
                            ...

    SystemReservedCgroup & KebeReservedCgroup 需要手动创建,(如果kubelet是给自身与docker deamon创建cgroups,那么它将会自动的创建KubeRservedCgroup未验证,官方是这样说的)

    If the kubepods cgroup does not exist, the kubelet creates it automatically (a quick check is shown after the list):

    1. If the cgroup driver is set to systemd, the kubelet creates a kubepods.slice via systemd.
    2. By default (the cgroupfs driver), the kubelet creates the /kubepods cgroup directly with mkdir.
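
    A quick way to confirm this on a node that uses the systemd driver (paths assume cgroup v1):

    ls -d /sys/fs/cgroup/memory/kubepods.slice
    # the memory limit the kubelet enforces across all pods
    cat /sys/fs/cgroup/memory/kubepods.slice/memory.limit_in_bytes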

    If the kubelet itself is run as a container, the cgroups for the kubelet and the container runtime still fall under the KubeReservedCgroup.
