• Kubernetes 1.18.5 Cilium Installation


    This document describes deploying a self-managed Kubernetes platform (v1.18.5) on Alibaba Cloud.

    Installing the Kubernetes components


    The components are as follows:

    Component    Description
    docker       Container runtime; must be deployed on every node in the cluster
    kubelet      The per-node Kubernetes agent; starts Pods & containers, communicates with the kube-apiserver, and handles some monitoring; must be deployed on every node in the cluster
    kubectl      Command-line tool for talking to the cluster; usually deployed on the Control-plane nodes
    kubeadm      Command used to initialize the cluster; usually installed on the Control-plane nodes

    Version skew notes

    The Kubernetes project recommends keeping the component versions consistent; where they differ, consult the official version skew policy.
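    For example, the installed versions can be compared quickly on each node (a hypothetical spot check, not part of the original run):

      kubeadm version -o short
      kubelet --version
      kubectl version --client --short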

    The detailed installation of the Kubernetes components is omitted here; refer to the official documentation.
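
    As a minimal sketch (assuming CentOS hosts with the Docker CE and Kubernetes yum repositories already configured; adapt to your OS and package source), installing the components typically looks like:

      # Run on every node; versions pinned to match the cluster
      yum install -y docker-ce
      yum install -y kubelet-1.18.5 kubeadm-1.18.5 kubectl-1.18.5 --disableexcludes=kubernetes
      systemctl enable --now docker kubelet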

    Cluster initialization


    Prerequisites

    1. One Alibaba Cloud SLB
    2. At least three Control-plane nodes
    3. Some number of Worker nodes

    Procedure

    1. Create an Alibaba Cloud SLB (it must be assigned a public IP address and listen on port 6443; the public address is needed to work around an error that otherwise occurs during initialization)
    2. Modify /etc/hosts on each Control-plane node as follows (a quick resolution check is shown at the end of this step)
      # Node 1
      <root@PROD-K8S-CP1 ~># cat /etc/hosts
      ::1    localhost    localhost.localdomain    localhost6    localhost6.localdomain6
      127.0.0.1    localhost    localhost.localdomain    localhost4    localhost4.localdomain4
      10.1.0.252    PROD-K8S-CP1    PROD-K8S-CP1
      10.1.0.252    apiserver.qiangyun.com
      
      # Node 2
      <root@PROD-K8S-CP2 ~># cat /etc/hosts
      ::1    localhost    localhost.localdomain    localhost6    localhost6.localdomain6
      127.0.0.1    localhost    localhost.localdomain    localhost4    localhost4.localdomain4
      10.1.0.251    PROD-K8S-CP2    PROD-K8S-CP2
      10.1.0.251    apiserver.qiangyun.com
      
      # Node 3
      <root@PROD-K8S-CP3 ~># cat /etc/hosts
      ::1    localhost    localhost.localdomain    localhost6    localhost6.localdomain6
      127.0.0.1    localhost    localhost.localdomain    localhost4    localhost4.localdomain4
      10.1.0.1    PROD-K8S-CP3    PROD-K8S-CP3
      10.1.0.1    apiserver.qiangyun.com
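
      A quick resolution check (hypothetical, not captured from the original run); each Control-plane node should resolve apiserver.qiangyun.com to its own address, e.g. 10.1.0.252 on PROD-K8S-CP1:
      getent hosts apiserver.qiangyun.com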
    3. Initialize the cluster
      kubeadm init \
          --skip-phases=addon/kube-proxy \
          --apiserver-cert-extra-sans=121.xxx.xxx.xxx,127.0.0.1,10.1.0.252,10.1.0.251,10.1.0.1,10.1.0.2 \
          --control-plane-endpoint=apiserver.qiangyun.com \
          --apiserver-advertise-address=10.1.0.252 \
          --pod-network-cidr=172.21.0.0/20 \
          --service-cidr=10.12.0.0/24 \
          --upload-certs \
          --kubernetes-version=v1.18.5 \
          --v=5
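
      The same settings can also be kept in a kubeadm configuration file (a rough, hypothetical sketch using the kubeadm.k8s.io/v1beta2 API; the actual run below keeps the flag form), invoked as "kubeadm init --config kubeadm.yaml --upload-certs --skip-phases=addon/kube-proxy --v=5":

      # kubeadm.yaml -- hypothetical equivalent of the flags above
      apiVersion: kubeadm.k8s.io/v1beta2
      kind: InitConfiguration
      localAPIEndpoint:
        advertiseAddress: 10.1.0.252
      ---
      apiVersion: kubeadm.k8s.io/v1beta2
      kind: ClusterConfiguration
      kubernetesVersion: v1.18.5
      controlPlaneEndpoint: apiserver.qiangyun.com:6443
      apiServer:
        certSANs:
        - 121.xxx.xxx.xxx
        - 127.0.0.1
        - 10.1.0.252
        - 10.1.0.251
        - 10.1.0.1
        - 10.1.0.2
      networking:
        podSubnet: 172.21.0.0/20
        serviceSubnet: 10.12.0.0/24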
      
      <root@PROD-K8S-CP1 ~># kubeadm init \
      >     --skip-phases=addon/kube-proxy \
      >     --apiserver-cert-extra-sans=121.xxx.xxx.xxx,127.0.0.1,10.1.0.252,10.1.0.251,10.1.0.1,10.1.0.2 \
      >     --control-plane-endpoint=apiserver.qiangyun.com \
      >     --apiserver-advertise-address=10.1.0.252 \
      >     --pod-network-cidr=172.21.0.0/20 \
      >     --service-cidr=10.12.0.0/24 \
      >     --upload-certs \
      >     --kubernetes-version=v1.18.5 \
      >     --v=5
      I0823 16:02:18.611433    6956 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
      W0823 16:02:18.612369    6956 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
      [init] Using Kubernetes version: v1.18.5
      [preflight] Running pre-flight checks
      I0823 16:02:18.612654    6956 checks.go:577] validating Kubernetes and kubeadm version
      I0823 16:02:18.612680    6956 checks.go:166] validating if the firewall is enabled and active
      I0823 16:02:18.675049    6956 checks.go:201] validating availability of port 6443
      I0823 16:02:18.675196    6956 checks.go:201] validating availability of port 10259
      I0823 16:02:18.675229    6956 checks.go:201] validating availability of port 10257
      I0823 16:02:18.675256    6956 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
      I0823 16:02:18.675552    6956 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
      I0823 16:02:18.675563    6956 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
      I0823 16:02:18.675574    6956 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
      I0823 16:02:18.675585    6956 checks.go:432] validating if the connectivity type is via proxy or direct
      I0823 16:02:18.675624    6956 checks.go:471] validating http connectivity to first IP address in the CIDR
      I0823 16:02:18.675651    6956 checks.go:471] validating http connectivity to first IP address in the CIDR
      I0823 16:02:18.675664    6956 checks.go:102] validating the container runtime
      I0823 16:02:18.859600    6956 checks.go:128] validating if the service is enabled and active
      I0823 16:02:18.967047    6956 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
      I0823 16:02:18.967094    6956 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
      I0823 16:02:18.967120    6956 checks.go:649] validating whether swap is enabled or not
      I0823 16:02:18.967148    6956 checks.go:376] validating the presence of executable conntrack
      I0823 16:02:18.967176    6956 checks.go:376] validating the presence of executable ip
      I0823 16:02:18.967190    6956 checks.go:376] validating the presence of executable iptables
      I0823 16:02:18.967205    6956 checks.go:376] validating the presence of executable mount
      I0823 16:02:18.967240    6956 checks.go:376] validating the presence of executable nsenter
      I0823 16:02:18.967259    6956 checks.go:376] validating the presence of executable ebtables
      I0823 16:02:18.967273    6956 checks.go:376] validating the presence of executable ethtool
      I0823 16:02:18.967287    6956 checks.go:376] validating the presence of executable socat
      I0823 16:02:18.967307    6956 checks.go:376] validating the presence of executable tc
      I0823 16:02:18.967320    6956 checks.go:376] validating the presence of executable touch
      I0823 16:02:18.967338    6956 checks.go:520] running all checks
      I0823 16:02:19.037352    6956 checks.go:406] checking whether the given node name is reachable using net.LookupHost
      I0823 16:02:19.037475    6956 checks.go:618] validating kubelet version
      I0823 16:02:19.077279    6956 checks.go:128] validating if the service is enabled and active
      I0823 16:02:19.092014    6956 checks.go:201] validating availability of port 10250
      I0823 16:02:19.092071    6956 checks.go:201] validating availability of port 2379
      I0823 16:02:19.092097    6956 checks.go:201] validating availability of port 2380
      I0823 16:02:19.092123    6956 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd
      [preflight] Pulling images required for setting up a Kubernetes cluster
      [preflight] This might take a minute or two, depending on the speed of your internet connection
      [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
      I0823 16:02:19.125348    6956 checks.go:838] image exists: k8s.gcr.io/kube-apiserver:v1.18.5
      I0823 16:02:19.164350    6956 checks.go:844] pulling k8s.gcr.io/kube-controller-manager:v1.18.5
      I0823 16:02:56.306614    6956 checks.go:844] pulling k8s.gcr.io/kube-scheduler:v1.18.5
      I0823 16:03:11.013674    6956 checks.go:844] pulling k8s.gcr.io/kube-proxy:v1.18.5
      I0823 16:03:18.148748    6956 checks.go:844] pulling k8s.gcr.io/pause:3.2
      I0823 16:03:20.591560    6956 checks.go:844] pulling k8s.gcr.io/etcd:3.4.3-0
      I0823 16:03:52.643687    6956 checks.go:844] pulling k8s.gcr.io/coredns:1.6.7
      I0823 16:04:00.559856    6956 kubelet.go:64] Stopping the kubelet
      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet-start] Starting the kubelet
      [certs] Using certificateDir folder "/etc/kubernetes/pki"
      I0823 16:04:00.686175    6956 certs.go:103] creating a new certificate authority for ca
      [certs] Generating "ca" certificate and key
      [certs] Generating "apiserver" certificate and key
      [certs] apiserver serving cert is signed for DNS names [prod-k8s-cp1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local apiserver.qiangyun.com] and IPs [10.12.0.1 10.1.0.252 121.40.18.75 127.0.0.1 10.1.0.252 10.1.0.251 10.1.0.1 10.1.0.2]
      [certs] Generating "apiserver-kubelet-client" certificate and key
      I0823 16:04:01.101542    6956 certs.go:103] creating a new certificate authority for front-proxy-ca
      [certs] Generating "front-proxy-ca" certificate and key
      [certs] Generating "front-proxy-client" certificate and key
      I0823 16:04:02.185320    6956 certs.go:103] creating a new certificate authority for etcd-ca
      [certs] Generating "etcd/ca" certificate and key
      [certs] Generating "etcd/server" certificate and key
      [certs] etcd/server serving cert is signed for DNS names [prod-k8s-cp1 localhost] and IPs [10.1.0.252 127.0.0.1 ::1]
      [certs] Generating "etcd/peer" certificate and key
      [certs] etcd/peer serving cert is signed for DNS names [prod-k8s-cp1 localhost] and IPs [10.1.0.252 127.0.0.1 ::1]
      [certs] Generating "etcd/healthcheck-client" certificate and key
      [certs] Generating "apiserver-etcd-client" certificate and key
      I0823 16:04:03.236847    6956 certs.go:69] creating new public/private key files for signing service account users
      [certs] Generating "sa" key and public key
      [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
      I0823 16:04:03.348004    6956 kubeconfig.go:79] creating kubeconfig file for admin.conf
      [kubeconfig] Writing "admin.conf" kubeconfig file
      I0823 16:04:03.482899    6956 kubeconfig.go:79] creating kubeconfig file for kubelet.conf
      [kubeconfig] Writing "kubelet.conf" kubeconfig file
      I0823 16:04:03.943898    6956 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf
      [kubeconfig] Writing "controller-manager.conf" kubeconfig file
      I0823 16:04:04.071250    6956 kubeconfig.go:79] creating kubeconfig file for scheduler.conf
      [kubeconfig] Writing "scheduler.conf" kubeconfig file
      [control-plane] Using manifest folder "/etc/kubernetes/manifests"
      [control-plane] Creating static Pod manifest for "kube-apiserver"
      I0823 16:04:04.179856    6956 manifests.go:91] [control-plane] getting StaticPodSpecs
      I0823 16:04:04.180100    6956 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
      I0823 16:04:04.180107    6956 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
      I0823 16:04:04.180112    6956 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
      I0823 16:04:04.186013    6956 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
      [control-plane] Creating static Pod manifest for "kube-controller-manager"
      I0823 16:04:04.186031    6956 manifests.go:91] [control-plane] getting StaticPodSpecs
      W0823 16:04:04.186073    6956 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
      I0823 16:04:04.186219    6956 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
      I0823 16:04:04.186225    6956 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
      I0823 16:04:04.186230    6956 manifests.go:104] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
      I0823 16:04:04.186235    6956 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
      I0823 16:04:04.186240    6956 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
      I0823 16:04:04.186851    6956 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
      [control-plane] Creating static Pod manifest for "kube-scheduler"
      I0823 16:04:04.186869    6956 manifests.go:91] [control-plane] getting StaticPodSpecs
      W0823 16:04:04.186910    6956 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
      I0823 16:04:04.187052    6956 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
      I0823 16:04:04.187378    6956 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
      [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
      I0823 16:04:04.187871    6956 local.go:72] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
      I0823 16:04:04.187883    6956 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy
      [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
      [apiclient] All control plane components are healthy after 20.001413 seconds
      I0823 16:04:24.190509    6956 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
      [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
      I0823 16:04:24.199465    6956 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap
      [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
      I0823 16:04:24.204090    6956 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node
      I0823 16:04:24.204102    6956 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "prod-k8s-cp1" as an annotation
      [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
      [upload-certs] Using certificate key:
      7f415dbe35dcd1618686d970daa7cade1922996f4c4bdaca24d86923f98e04bf
      [mark-control-plane] Marking the node prod-k8s-cp1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
      [mark-control-plane] Marking the node prod-k8s-cp1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
      [bootstrap-token] Using token: wtq34u.sk1bwt4lsphcxdfl
      [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
      [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
      [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
      [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
      [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
      [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
      I0823 16:04:25.239429    6956 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig
      I0823 16:04:25.239785    6956 clusterinfo.go:53] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
      I0823 16:04:25.239999    6956 clusterinfo.go:65] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
      I0823 16:04:25.241498    6956 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
      I0823 16:04:25.244262    6956 kubeletfinalize.go:88] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
      [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
      I0823 16:04:25.244970    6956 kubeletfinalize.go:132] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
      I0823 16:04:25.504316    6956 request.go:557] Throttling request took 184.037078ms, request: POST:https://apiserver.qiangyun.com:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s
      I0823 16:04:25.704312    6956 request.go:557] Throttling request took 194.29686ms, request: POST:https://apiserver.qiangyun.com:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s
      I0823 16:04:25.905932    6956 request.go:557] Throttling request took 185.370582ms, request: POST:https://apiserver.qiangyun.com:6443/api/v1/namespaces/kube-system/services?timeout=10s
      [addons] Applied essential addon: CoreDNS
      
      Your Kubernetes control-plane has initialized successfully!
      
      To start using your cluster, you need to run the following as a regular user:
      
        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
      
      You should now deploy a pod network to the cluster.
      Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
        https://kubernetes.io/docs/concepts/cluster-administration/addons/
      
      You can now join any number of the control-plane node running the following command on each as root:
      
        kubeadm join apiserver.qiangyun.com:6443 --token wtq34u.sk1bwt4lsphcxdfl \
          --discovery-token-ca-cert-hash sha256:99169fbe986dbba56c840f55fe10e00890ff70f9343acf4dac70954a9df9500b \
          --control-plane --certificate-key 7f415dbe35dcd1618686d970daa7cade1922996f4c4bdaca24d86923f98e04bf
      
      Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
      As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
      "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
      
      Then you can join any number of worker nodes by running the following on each as root:
      
      kubeadm join apiserver.qiangyun.com:6443 --token wtq34u.sk1bwt4lsphcxdfl \
          --discovery-token-ca-cert-hash sha256:99169fbe986dbba56c840f55fe10e00890ff70f9343acf4dac70954a9df9500b

      # Post-initialization configuration (the node shows NotReady because no CNI network plugin has been deployed yet)

      <root@PROD-K8S-CP1 /etc/kubernetes># mkdir -p $HOME/.kube
      <root@PROD-K8S-CP1 /etc/kubernetes># cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      <root@PROD-K8S-CP1 /etc/kubernetes># chown $(id -u):$(id -g) $HOME/.kube/config
      <root@PROD-K8S-CP1 /etc/kubernetes># kubectl get nodes
      NAME           STATUS     ROLES    AGE    VERSION
      prod-k8s-cp1   NotReady   master   3m5s   v1.18.5

    4. Modify /etc/hosts on PROD-K8S-CP2/PROD-K8S-CP3 so that apiserver.qiangyun.com resolves to the SLB public address, as follows
      <root@PROD-K8S-CP2 ~># cat /etc/hosts
      ::1    localhost    localhost.localdomain    localhost6    localhost6.localdomain6
      127.0.0.1    localhost    localhost.localdomain    localhost4    localhost4.localdomain4
      10.1.0.251    PROD-K8S-CP2    PROD-K8S-CP2
      #10.1.0.251    apiserver.qiangyun.com
      121.40.18.75    apiserver.qiangyun.com
    5. Join PROD-K8S-CP2/PROD-K8S-CP3 to the cluster
      <root@PROD-K8S-CP2 ~># kubeadm join apiserver.qiangyun.com:6443 --token wtq34u.sk1bwt4lsphcxdfl     --discovery-token-ca-cert-hash sha256:99169fbe986dbba56c840f55fe10e00890ff70f9343acf4dac70954a9df9500b     --control-plane --certificate-key 7f415dbe35dcd1618686d970daa7cade1922996f4c4bdaca24d86923f98e04bf
      [preflight] Running pre-flight checks
      [preflight] Reading configuration from the cluster...
      [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
      W0823 16:12:52.708215    7821 configset.go:76] Warning: No kubeproxy.config.k8s.io/v1alpha1 config is loaded. Continuing without it: configmaps "kube-proxy" is forbidden: User "system:bootstrap:wtq34u" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
      [preflight] Running pre-flight checks before initializing the new control plane instance
      [preflight] Pulling images required for setting up a Kubernetes cluster
      [preflight] This might take a minute or two, depending on the speed of your internet connection
      [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
      [download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
      [certs] Using certificateDir folder "/etc/kubernetes/pki"
      [certs] Generating "apiserver-kubelet-client" certificate and key
      [certs] Generating "apiserver" certificate and key
      [certs] apiserver serving cert is signed for DNS names [prod-k8s-cp2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local apiserver.qiangyun.com] and IPs [10.12.0.1 10.1.0.251 121.40.18.75 127.0.0.1 10.1.0.252 10.1.0.251 10.1.0.1 10.1.0.2]
      [certs] Generating "front-proxy-client" certificate and key
      [certs] Generating "etcd/server" certificate and key
      [certs] etcd/server serving cert is signed for DNS names [prod-k8s-cp2 localhost] and IPs [10.1.0.251 127.0.0.1 ::1]
      [certs] Generating "etcd/peer" certificate and key
      [certs] etcd/peer serving cert is signed for DNS names [prod-k8s-cp2 localhost] and IPs [10.1.0.251 127.0.0.1 ::1]
      [certs] Generating "etcd/healthcheck-client" certificate and key
      [certs] Generating "apiserver-etcd-client" certificate and key
      [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
      [certs] Using the existing "sa" key
      [kubeconfig] Generating kubeconfig files
      [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
      [kubeconfig] Writing "admin.conf" kubeconfig file
      [kubeconfig] Writing "controller-manager.conf" kubeconfig file
      [kubeconfig] Writing "scheduler.conf" kubeconfig file
      [control-plane] Using manifest folder "/etc/kubernetes/manifests"
      [control-plane] Creating static Pod manifest for "kube-apiserver"
      W0823 16:15:40.233981    7821 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
      [control-plane] Creating static Pod manifest for "kube-controller-manager"
      W0823 16:15:40.238842    7821 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
      [control-plane] Creating static Pod manifest for "kube-scheduler"
      W0823 16:15:40.240758    7821 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
      [check-etcd] Checking that the etcd cluster is healthy
      [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet-start] Starting the kubelet
      [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
      [etcd] Announced new etcd member joining to the existing etcd cluster
      [etcd] Creating static Pod manifest for "etcd"
      [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
      {"level":"warn","ts":"2021-08-23T16:15:53.543+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://10.1.0.251:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
      [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
      [mark-control-plane] Marking the node prod-k8s-cp2 as control-plane by adding the label "node-role.kubernetes.io/master=''"
      [mark-control-plane] Marking the node prod-k8s-cp2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
      
      This node has joined the cluster and a new control plane instance was created:
      
      * Certificate signing request was sent to apiserver and approval was received.
      * The Kubelet was informed of the new secure connection details.
      * Control plane (master) label and taint were applied to the new node.
      * The Kubernetes control plane instances scaled up.
      * A new etcd member was added to the local/stacked etcd cluster.
      
      To start administering your cluster from this node, you need to run the following as a regular user:
      
          mkdir -p $HOME/.kube
          sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
          sudo chown $(id -u):$(id -g) $HOME/.kube/config
      
      Run 'kubectl get nodes' to see this node join the cluster.
      
      <root@PROD-K8S-CP2 ~># 
      <root@PROD-K8S-CP2 ~># 
      <root@PROD-K8S-CP2 ~># mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config
      <root@PROD-K8S-CP2 ~># kubectl get nodes
      NAME           STATUS     ROLES    AGE    VERSION
      prod-k8s-cp1   NotReady   master   13m    v1.18.5
      prod-k8s-cp2   NotReady   master   102s   v1.18.5
    6. Restore /etc/hosts on PROD-K8S-CP2/PROD-K8S-CP3 so that apiserver.qiangyun.com points back to the local node address, as follows
      <root@PROD-K8S-CP3 /etc/kubernetes># cat /etc/hosts
      ::1    localhost    localhost.localdomain    localhost6    localhost6.localdomain6
      127.0.0.1    localhost    localhost.localdomain    localhost4    localhost4.localdomain4
      10.1.0.1    PROD-K8S-CP3    PROD-K8S-CP3
      10.1.0.1    apiserver.qiangyun.com 
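
      After restoring the hosts file, a quick sanity check (hypothetical, not part of the original run) that the API server is still reachable through the local entry:
      # /healthz is served to anonymous clients by default on v1.18; expected output: ok
      curl -k https://apiserver.qiangyun.com:6443/healthz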
    7. Install kube-router as follows (for detailed installation requirements, see the official documentation)
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: kube-router
        namespace: kube-system
       
      ---
      kind: ClusterRole
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: kube-router
        namespace: kube-system
      rules:
        - apiGroups:
          - ""
          resources:
            - namespaces
            - pods
            - services
            - nodes
            - endpoints
          verbs:
            - list
            - get
            - watch
        - apiGroups:
          - "networking.k8s.io"
          resources:
            - networkpolicies
          verbs:
            - list
            - get
            - watch
        - apiGroups:
          - extensions
          resources:
            - networkpolicies
          verbs:
            - get
            - list
            - watch
       
      ---
      kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: kube-router
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: kube-router
      subjects:
      - kind: ServiceAccount
        name: kube-router
        namespace: kube-system
       
      ---
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        labels:
          k8s-app: kube-router
          tier: node
        name: kube-router
        namespace: kube-system
      spec:
        selector:
          matchLabels:
            k8s-app: kube-router
            tier: node
        template:
          metadata:
            labels:
              k8s-app: kube-router
              tier: node
          spec:
            priorityClassName: system-node-critical
            serviceAccountName: kube-router
            serviceAccount: kube-router
            containers:
            - name: kube-router
              image: docker.io/cloudnativelabs/kube-router
              imagePullPolicy: IfNotPresent
              args:
              - "--run-router=true"
              - "--run-firewall=false"
              - "--run-service-proxy=false"
              # Graceful restart of BGP routes (default: true)
              - "--bgp-graceful-restart=true"
              # SNAT traffic from Pods to destinations outside the cluster (default: true)
              - "--enable-pod-egress=false"
              # Enable the CNI plugin; disable it when using kube-router alongside another CNI plugin (default: true)
              - "--enable-cni=false"
              ##########################################################
              #  The options above are required when running kube-router alongside Cilium
              ##########################################################
              #
              # Peer with nodes that share the same ASN; if disabled, peer only with external BGP peers (default: true)
              - "--enable-ibgp=false"
              # Enable IP-in-IP overlay (default: true)
              - "--enable-overlay=false"
              # Add Service cluster IPs to the RIB so they are advertised to BGP peers
              #- "--advertise-cluster-ip=true"
              # Add external service IPs to the RIB so they are advertised to BGP peers
              #- "--advertise-external-ip=true"
              #- "--advertise-loadbalancer-ip=true"
              - "--metrics-port=20245"
              env:
              - name: KUBERNETES_SERVICE_HOST
                value: "apiserver.qiangyun.com"
              - name: KUBERNETES_SERVICE_PORT
                value: "6443"
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
              livenessProbe:
                httpGet:
                  path: /healthz
                  port: 20244
                initialDelaySeconds: 10
                periodSeconds: 3
              resources:
                requests:
                  cpu: 250m
                  memory: 250Mi
                limits:
                  cpu: 1
                  memory: 1024Mi
              securityContext:
                privileged: true
              volumeMounts:
              - name: xtables-lock
                mountPath: /run/xtables.lock
                readOnly: false
              - mountPath: /etc/localtime
                name: localtime
            hostNetwork: true
            tolerations:
            - effect: NoSchedule
              operator: Exists
            - key: CriticalAddonsOnly
              operator: Exists
            - effect: NoExecute
              operator: Exists
            volumes:
              - name: xtables-lock
                hostPath:
                  path: /run/xtables.lock
                  type: FileOrCreate
              - hostPath:
                  path: /etc/localtime
                  type: ""
                name: localtime
    8. Apply the manifest
      <root@PROD-K8S-CP1 ~># kubectl apply -f kube-router.yaml 
      serviceaccount/kube-router created
      clusterrole.rbac.authorization.k8s.io/kube-router created
      clusterrolebinding.rbac.authorization.k8s.io/kube-router created
      daemonset.apps/kube-router created
      
      <root@PROD-K8S-CP1 ~># dps
      8b8080abd924    Up 1 second    k8s_kube-router_kube-router-x4p8p_kube-system_5e0ae98b-4379-468e-adb0-b8d53afb0f8c_0
      41ed7a23568b    Up 58 minutes    k8s_kube-scheduler_kube-scheduler-prod-k8s-cp1_kube-system_3415bde3e2a04810cc416f7719a3f6aa_1
      19c04a8d26b9    Up 58 minutes    k8s_kube-controller-manager_kube-controller-manager-prod-k8s-cp1_kube-system_42925a280ad1a6ea64e50a34fd744ba8_1
      a06439059dbc    Up About an hour    k8s_kube-apiserver_kube-apiserver-prod-k8s-cp1_kube-system_db90a051e75b886b7e4c824ce338d134_0
      7fd5eb50599a    Up About an hour    k8s_etcd_etcd-prod-k8s-cp1_kube-system_9adeb44bb2242e1393bcb109f3d4da5e_0
      <root@PROD-K8S-CP1 ~># docker logs -f 8b8
      I0823 09:14:18.098022 1 version.go:21] Running /usr/local/bin/kube-router version v1.3.1, built on 2021-08-13T15:31:53+0000, go1.16.7
      I0823 09:14:18.200175 1 metrics_controller.go:174] Starting metrics controller
      I0823 09:14:18.213300 1 network_routes_controller.go:1260] Could not find annotation `kube-router.io/bgp-local-addresses` on node object so BGP will listen on node IP: 10.1.0.252 address.
      I0823 09:14:18.271081 1 network_routes_controller.go:222] Setting MTU of kube-bridge interface to: 1500
      E0823 09:14:18.271644 1 network_routes_controller.go:233] Failed to enable netfilter for bridge. Network policies and service proxy may not work: exit status 1
      I0823 09:14:18.271685 1 network_routes_controller.go:249] Starting network route controller
      I0823 09:14:18.273840 1 network_routes_controller.go:991] Could not find BGP peer info for the node in the node annotations so skipping configuring peer.
      
      <root@PROD-K8S-CP1 ~># kubectl get pods -n kube-system -o wide | grep kube-router
      kube-router-fdpjj   1/1   Running   0   84s   10.1.0.251    prod-k8s-cp2   <none>   <none>
      kube-router-s8h9q   1/1   Running   0   84s   10.1.17.229   prod-k8s-wn2   <none>   <none>
      kube-router-t7whm   1/1   Running   0   84s   10.1.0.1      prod-k8s-cp3   <none>   <none>
      kube-router-v68b2   1/1   Running   0   84s   10.1.17.230   prod-k8s-wn1   <none>   <none>
      kube-router-x4p8p   1/1   Running   0   84s   10.1.0.252    prod-k8s-cp1   <none>   <none>
    9. Additional information
      ## Additional information, as follows
      ## gobgp command examples: https://github.com/osrg/gobgp/blob/master/docs/sources/cli-command-syntax.md
      ## BGP peer configuration: https://github.com/cloudnativelabs/kube-router/blob/master/docs/bgp.md (by default no custom ASN needs to be specified; it defaults to 64512 with a node-to-node mesh)
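      ## As a follow-up check (hypothetical; the pod name is taken from the output in step 8), the gobgp CLI shipped in the kube-router image can list the BGP neighbors once peers are configured:
      ##   kubectl -n kube-system exec -it kube-router-x4p8p -- gobgp neighbor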
    10. Install helm and add the cilium repo
      <root@PROD-K8S-CP1 ~># export http_proxy=http://10.1.0.160:1087 && export https_proxy=http://10.1.0.160:1087
      <root@PROD-K8S-CP1 ~># curl -SLO https://get.helm.sh/helm-v3.2.3-linux-amd64.tar.gz
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      100 12.3M  100 12.3M    0     0  3802k      0  0:00:03  0:00:03 --:--:-- 3802k

      <root@PROD-K8S-CP1 ~># tar -zxvf helm-v3.2.3-linux-amd64.tar.gz
      linux-amd64/
      linux-amd64/README.md
      linux-amd64/LICENSE
      linux-amd64/helm

      <root@PROD-K8S-CP1 ~># mv linux-amd64/helm /usr/bin/helm
      <root@PROD-K8S-CP1 ~># helm repo add cilium https://helm.cilium.io/
      "cilium" has been added to your repositories
      <root@PROD-K8S-CP1 ~># helm search repo
      NAME            CHART VERSION   APP VERSION     DESCRIPTION
      cilium/cilium   1.10.3          1.10.3          eBPF-based Networking, Security, and Observability
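
      Note that the http_proxy/https_proxy variables exported above also apply to later commands that talk to apiserver.qiangyun.com:6443; if the proxy cannot reach the SLB, it may be safer (a hypothetical precaution, not part of the original run) to unset them before the helm install in the next step:
      unset http_proxy https_proxy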
    11. Install Cilium in DSR (hybrid) mode
      helm install cilium cilium/cilium --version 1.9.4 \
          --namespace kube-system \
          --set tunnel=disabled \
          --set autoDirectNodeRoutes=true \
          --set kubeProxyReplacement=strict \
          --set loadBalancer.mode=hybrid \
          --set nativeRoutingCIDR=172.21.0.0/20 \
          --set ipam.operator.clusterPoolIPv4PodCIDR=172.21.0.0/20 \
          --set ipam.operator.clusterPoolIPv4MaskSize=26 \
          --set k8sServiceHost=apiserver.qiangyun.com \
          --set k8sServicePort=6443
      
      <root@PROD-K8S-CP1 ~># helm install cilium cilium/cilium --version 1.9.4 \
      >     --namespace kube-system \
      >     --set tunnel=disabled \
      >     --set autoDirectNodeRoutes=true \
      >     --set kubeProxyReplacement=strict \
      >     --set loadBalancer.mode=hybrid \
      >     --set nativeRoutingCIDR=172.21.0.0/20 \
      >     --set ipam.operator.clusterPoolIPv4PodCIDR=172.21.0.0/20 \
      >     --set ipam.operator.clusterPoolIPv4MaskSize=26 \
      >     --set k8sServiceHost=apiserver.qiangyun.com \
      >     --set k8sServicePort=6443
      NAME: cilium
      LAST DEPLOYED: Mon Aug 23 18:35:27 2021
      NAMESPACE: kube-system
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
      NOTES:
      You have successfully installed Cilium with Hubble.
      
      Your release version is 1.9.4.
      
      For any further help, visit https://docs.cilium.io/en/v1.9/gettinghelp
    12. Check the result, as follows
      <root@PROD-K8S-CP1 ~># kubectl get pods  -n kube-system -o wide | grep cilium
      cilium-5w2gs                           1/1     Running   0          2m27s   10.1.0.1      prod-k8s-cp3   <none>           <none>
      cilium-9zf99                           1/1     Running   0          2m27s   10.1.17.230   prod-k8s-wn1   <none>           <none>
      cilium-mpg9w                           1/1     Running   0          2m27s   10.1.0.252    prod-k8s-cp1   <none>           <none>
      cilium-nrh96                           1/1     Running   0          2m27s   10.1.0.251    prod-k8s-cp2   <none>           <none>
      cilium-operator-66bf5877c7-tctj5       1/1     Running   0          2m27s   10.1.17.230   prod-k8s-wn1   <none>           <none>
      cilium-operator-66bf5877c7-x7mnh       1/1     Running   0          2m27s   10.1.17.229   prod-k8s-wn2   <none>           <none>
      cilium-t2d5r                           1/1     Running   0          2m27s   10.1.17.229   prod-k8s-wn2   <none>           <none>
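
      A further check (hypothetical; the pod name is taken from the output above) is to query one of the agents directly; with this chart configuration, cilium status should report kube-proxy replacement as Strict:
      kubectl -n kube-system exec -it cilium-mpg9w -- cilium status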
    13. Check the cilium-agent startup log
      <root@PROD-K8S-CP1 ~># docker logs -f c88
      level=info msg="Skipped reading configuration file" reason="Config File "ciliumd" Not Found in "[/root]"" subsys=config
      level=info msg="Started gops server" address="127.0.0.1:9890" subsys=daemon
      level=info msg="Memory available for map entries (0.003% of 8065806336B): 20164515B" subsys=config
      level=info msg="option bpf-ct-global-tcp-max set by dynamic sizing to 131072" subsys=config
      level=info msg="option bpf-ct-global-any-max set by dynamic sizing to 65536" subsys=config
      level=info msg="option bpf-nat-global-max set by dynamic sizing to 131072" subsys=config
      level=info msg="option bpf-neigh-global-max set by dynamic sizing to 131072" subsys=config
      level=info msg="option bpf-sock-rev-map-max set by dynamic sizing to 65536" subsys=config
      level=info msg="  --agent-health-port='9876'" subsys=daemon
      level=info msg="  --agent-labels=''" subsys=daemon
      level=info msg="  --allow-icmp-frag-needed='true'" subsys=daemon
      level=info msg="  --allow-localhost='auto'" subsys=daemon
      level=info msg="  --annotate-k8s-node='true'" subsys=daemon
      level=info msg="  --api-rate-limit='map[]'" subsys=daemon
      level=info msg="  --auto-create-cilium-node-resource='true'" subsys=daemon
      level=info msg="  --auto-direct-node-routes='true'" subsys=daemon
      level=info msg="  --blacklist-conflicting-routes='false'" subsys=daemon
      level=info msg="  --bpf-compile-debug='false'" subsys=daemon
      level=info msg="  --bpf-ct-global-any-max='262144'" subsys=daemon
      level=info msg="  --bpf-ct-global-tcp-max='524288'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-regular-tcp='6h0m0s'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-service-tcp='6h0m0s'" subsys=daemon
      level=info msg="  --bpf-fragments-map-max='8192'" subsys=daemon
      level=info msg="  --bpf-lb-acceleration='disabled'" subsys=daemon
      level=info msg="  --bpf-lb-algorithm='random'" subsys=daemon
      level=info msg="  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'" subsys=daemon
      level=info msg="  --bpf-lb-maglev-table-size='16381'" subsys=daemon
      level=info msg="  --bpf-lb-map-max='65536'" subsys=daemon
      level=info msg="  --bpf-lb-mode='snat'" subsys=daemon
      level=info msg="  --bpf-map-dynamic-size-ratio='0.0025'" subsys=daemon
      level=info msg="  --bpf-nat-global-max='524288'" subsys=daemon
      level=info msg="  --bpf-neigh-global-max='524288'" subsys=daemon
      level=info msg="  --bpf-policy-map-max='16384'" subsys=daemon
      level=info msg="  --bpf-root=''" subsys=daemon
      level=info msg="  --bpf-sock-rev-map-max='262144'" subsys=daemon
      level=info msg="  --certificates-directory='/var/run/cilium/certs'" subsys=daemon
      level=info msg="  --cgroup-root=''" subsys=daemon
      level=info msg="  --cluster-id=''" subsys=daemon
      level=info msg="  --cluster-name='default'" subsys=daemon
      level=info msg="  --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
      level=info msg="  --cmdref=''" subsys=daemon
      level=info msg="  --config=''" subsys=daemon
      level=info msg="  --config-dir='/tmp/cilium/config-map'" subsys=daemon
      level=info msg="  --conntrack-gc-interval='0s'" subsys=daemon
      level=info msg="  --crd-wait-timeout='5m0s'" subsys=daemon
      level=info msg="  --datapath-mode='veth'" subsys=daemon
      level=info msg="  --debug='false'" subsys=daemon
      level=info msg="  --debug-verbose=''" subsys=daemon
      level=info msg="  --device=''" subsys=daemon
      level=info msg="  --devices=''" subsys=daemon
      level=info msg="  --direct-routing-device=''" subsys=daemon
      level=info msg="  --disable-cnp-status-updates='true'" subsys=daemon
      level=info msg="  --disable-conntrack='false'" subsys=daemon
      level=info msg="  --disable-endpoint-crd='false'" subsys=daemon
      level=info msg="  --disable-envoy-version-check='false'" subsys=daemon
      level=info msg="  --disable-iptables-feeder-rules=''" subsys=daemon
      level=info msg="  --dns-max-ips-per-restored-rule='1000'" subsys=daemon
      level=info msg="  --egress-masquerade-interfaces=''" subsys=daemon
      level=info msg="  --egress-multi-home-ip-rule-compat='false'" subsys=daemon
      level=info msg="  --enable-auto-protect-node-port-range='true'" subsys=daemon
      level=info msg="  --enable-bandwidth-manager='false'" subsys=daemon
      level=info msg="  --enable-bpf-clock-probe='true'" subsys=daemon
      level=info msg="  --enable-bpf-masquerade='true'" subsys=daemon
      level=info msg="  --enable-bpf-tproxy='false'" subsys=daemon
      level=info msg="  --enable-endpoint-health-checking='true'" subsys=daemon
      level=info msg="  --enable-endpoint-routes='false'" subsys=daemon
      level=info msg="  --enable-external-ips='true'" subsys=daemon
      level=info msg="  --enable-health-check-nodeport='true'" subsys=daemon
      level=info msg="  --enable-health-checking='true'" subsys=daemon
      level=info msg="  --enable-host-firewall='false'" subsys=daemon
      level=info msg="  --enable-host-legacy-routing='false'" subsys=daemon
      level=info msg="  --enable-host-port='true'" subsys=daemon
      level=info msg="  --enable-host-reachable-services='false'" subsys=daemon
      level=info msg="  --enable-hubble='true'" subsys=daemon
      level=info msg="  --enable-identity-mark='true'" subsys=daemon
      level=info msg="  --enable-ip-masq-agent='false'" subsys=daemon
      level=info msg="  --enable-ipsec='false'" subsys=daemon
      level=info msg="  --enable-ipv4='true'" subsys=daemon
      level=info msg="  --enable-ipv4-fragment-tracking='true'" subsys=daemon
      level=info msg="  --enable-ipv6='false'" subsys=daemon
      level=info msg="  --enable-ipv6-ndp='false'" subsys=daemon
      level=info msg="  --enable-k8s-api-discovery='false'" subsys=daemon
      level=info msg="  --enable-k8s-endpoint-slice='true'" subsys=daemon
      level=info msg="  --enable-k8s-event-handover='false'" subsys=daemon
      level=info msg="  --enable-l7-proxy='true'" subsys=daemon
      level=info msg="  --enable-local-node-route='true'" subsys=daemon
      level=info msg="  --enable-local-redirect-policy='false'" subsys=daemon
      level=info msg="  --enable-monitor='true'" subsys=daemon
      level=info msg="  --enable-node-port='false'" subsys=daemon
      level=info msg="  --enable-policy='default'" subsys=daemon
      level=info msg="  --enable-remote-node-identity='true'" subsys=daemon
      level=info msg="  --enable-selective-regeneration='true'" subsys=daemon
      level=info msg="  --enable-session-affinity='true'" subsys=daemon
      level=info msg="  --enable-svc-source-range-check='true'" subsys=daemon
      level=info msg="  --enable-tracing='false'" subsys=daemon
      level=info msg="  --enable-well-known-identities='false'" subsys=daemon
      level=info msg="  --enable-xt-socket-fallback='true'" subsys=daemon
      level=info msg="  --encrypt-interface=''" subsys=daemon
      level=info msg="  --encrypt-node='false'" subsys=daemon
      level=info msg="  --endpoint-interface-name-prefix='lxc+'" subsys=daemon
      level=info msg="  --endpoint-queue-size='25'" subsys=daemon
      level=info msg="  --endpoint-status=''" subsys=daemon
      level=info msg="  --envoy-log=''" subsys=daemon
      level=info msg="  --exclude-local-address=''" subsys=daemon
      level=info msg="  --fixed-identity-mapping='map[]'" subsys=daemon
      level=info msg="  --flannel-master-device=''" subsys=daemon
      level=info msg="  --flannel-uninstall-on-exit='false'" subsys=daemon
      level=info msg="  --force-local-policy-eval-at-source='true'" subsys=daemon
      level=info msg="  --gops-port='9890'" subsys=daemon
      level=info msg="  --host-reachable-services-protos='tcp,udp'" subsys=daemon
      level=info msg="  --http-403-msg=''" subsys=daemon
      level=info msg="  --http-idle-timeout='0'" subsys=daemon
      level=info msg="  --http-max-grpc-timeout='0'" subsys=daemon
      level=info msg="  --http-request-timeout='3600'" subsys=daemon
      level=info msg="  --http-retry-count='3'" subsys=daemon
      level=info msg="  --http-retry-timeout='0'" subsys=daemon
      level=info msg="  --hubble-disable-tls='false'" subsys=daemon
      level=info msg="  --hubble-event-queue-size='0'" subsys=daemon
      level=info msg="  --hubble-flow-buffer-size='4095'" subsys=daemon
      level=info msg="  --hubble-listen-address=':4244'" subsys=daemon
      level=info msg="  --hubble-metrics=''" subsys=daemon
      level=info msg="  --hubble-metrics-server=''" subsys=daemon
      level=info msg="  --hubble-socket-path='/var/run/cilium/hubble.sock'" subsys=daemon
      level=info msg="  --hubble-tls-cert-file='/var/lib/cilium/tls/hubble/server.crt'" subsys=daemon
      level=info msg="  --hubble-tls-client-ca-files='/var/lib/cilium/tls/hubble/client-ca.crt'" subsys=daemon
      level=info msg="  --hubble-tls-key-file='/var/lib/cilium/tls/hubble/server.key'" subsys=daemon
      level=info msg="  --identity-allocation-mode='crd'" subsys=daemon
      level=info msg="  --identity-change-grace-period='5s'" subsys=daemon
      level=info msg="  --install-iptables-rules='true'" subsys=daemon
      level=info msg="  --ip-allocation-timeout='2m0s'" subsys=daemon
      level=info msg="  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'" subsys=daemon
      level=info msg="  --ipam='cluster-pool'" subsys=daemon
      level=info msg="  --ipsec-key-file=''" subsys=daemon
      level=info msg="  --iptables-lock-timeout='5s'" subsys=daemon
      level=info msg="  --iptables-random-fully='false'" subsys=daemon
      level=info msg="  --ipv4-node='auto'" subsys=daemon
      level=info msg="  --ipv4-pod-subnets=''" subsys=daemon
      level=info msg="  --ipv4-range='auto'" subsys=daemon
      level=info msg="  --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
      level=info msg="  --ipv4-service-range='auto'" subsys=daemon
      level=info msg="  --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
      level=info msg="  --ipv6-mcast-device=''" subsys=daemon
      level=info msg="  --ipv6-node='auto'" subsys=daemon
      level=info msg="  --ipv6-pod-subnets=''" subsys=daemon
      level=info msg="  --ipv6-range='auto'" subsys=daemon
      level=info msg="  --ipv6-service-range='auto'" subsys=daemon
      level=info msg="  --ipvlan-master-device='undefined'" subsys=daemon
      level=info msg="  --join-cluster='false'" subsys=daemon
      level=info msg="  --k8s-api-server=''" subsys=daemon
      level=info msg="  --k8s-force-json-patch='false'" subsys=daemon
      level=info msg="  --k8s-heartbeat-timeout='30s'" subsys=daemon
      level=info msg="  --k8s-kubeconfig-path=''" subsys=daemon
      level=info msg="  --k8s-namespace='kube-system'" subsys=daemon
      level=info msg="  --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
      level=info msg="  --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
      level=info msg="  --k8s-service-cache-size='128'" subsys=daemon
      level=info msg="  --k8s-service-proxy-name=''" subsys=daemon
      level=info msg="  --k8s-sync-timeout='3m0s'" subsys=daemon
      level=info msg="  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
      level=info msg="  --k8s-watcher-queue-size='1024'" subsys=daemon
      level=info msg="  --keep-config='false'" subsys=daemon
      level=info msg="  --kube-proxy-replacement='strict'" subsys=daemon
      level=info msg="  --kube-proxy-replacement-healthz-bind-address=''" subsys=daemon
      level=info msg="  --kvstore=''" subsys=daemon
      level=info msg="  --kvstore-connectivity-timeout='2m0s'" subsys=daemon
      level=info msg="  --kvstore-lease-ttl='15m0s'" subsys=daemon
      level=info msg="  --kvstore-opt='map[]'" subsys=daemon
      level=info msg="  --kvstore-periodic-sync='5m0s'" subsys=daemon
      level=info msg="  --label-prefix-file=''" subsys=daemon
      level=info msg="  --labels=''" subsys=daemon
      level=info msg="  --lib-dir='/var/lib/cilium'" subsys=daemon
      level=info msg="  --log-driver=''" subsys=daemon
      level=info msg="  --log-opt='map[]'" subsys=daemon
      level=info msg="  --log-system-load='false'" subsys=daemon
      level=info msg="  --masquerade='true'" subsys=daemon
      level=info msg="  --max-controller-interval='0'" subsys=daemon
      level=info msg="  --metrics=''" subsys=daemon
      level=info msg="  --monitor-aggregation='medium'" subsys=daemon
      level=info msg="  --monitor-aggregation-flags='all'" subsys=daemon
      level=info msg="  --monitor-aggregation-interval='5s'" subsys=daemon
      level=info msg="  --monitor-queue-size='0'" subsys=daemon
      level=info msg="  --mtu='0'" subsys=daemon
      level=info msg="  --nat46-range='0:0:0:0:0:FFFF::/96'" subsys=daemon
      level=info msg="  --native-routing-cidr='172.21.0.0/20'" subsys=daemon
      level=info msg="  --node-port-acceleration='disabled'" subsys=daemon
      level=info msg="  --node-port-algorithm='random'" subsys=daemon
      level=info msg="  --node-port-bind-protection='true'" subsys=daemon
      level=info msg="  --node-port-mode='hybrid'" subsys=daemon
      level=info msg="  --node-port-range='30000,32767'" subsys=daemon
      level=info msg="  --policy-audit-mode='false'" subsys=daemon
      level=info msg="  --policy-queue-size='100'" subsys=daemon
      level=info msg="  --policy-trigger-interval='1s'" subsys=daemon
      level=info msg="  --pprof='false'" subsys=daemon
      level=info msg="  --preallocate-bpf-maps='false'" subsys=daemon
      level=info msg="  --prefilter-device='undefined'" subsys=daemon
      level=info msg="  --prefilter-mode='native'" subsys=daemon
      level=info msg="  --prepend-iptables-chains='true'" subsys=daemon
      level=info msg="  --prometheus-serve-addr=''" subsys=daemon
      level=info msg="  --proxy-connect-timeout='1'" subsys=daemon
      level=info msg="  --proxy-prometheus-port='0'" subsys=daemon
      level=info msg="  --read-cni-conf=''" subsys=daemon
      level=info msg="  --restore='true'" subsys=daemon
      level=info msg="  --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
      level=info msg="  --single-cluster-route='false'" subsys=daemon
      level=info msg="  --skip-crd-creation='false'" subsys=daemon
      level=info msg="  --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
      level=info msg="  --sockops-enable='false'" subsys=daemon
      level=info msg="  --state-dir='/var/run/cilium'" subsys=daemon
      level=info msg="  --tofqdns-dns-reject-response-code='refused'" subsys=daemon
      level=info msg="  --tofqdns-enable-dns-compression='true'" subsys=daemon
      level=info msg="  --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
      level=info msg="  --tofqdns-max-deferred-connection-deletes='10000'" subsys=daemon
      level=info msg="  --tofqdns-min-ttl='0'" subsys=daemon
      level=info msg="  --tofqdns-pre-cache=''" subsys=daemon
      level=info msg="  --tofqdns-proxy-port='0'" subsys=daemon
      level=info msg="  --tofqdns-proxy-response-max-delay='100ms'" subsys=daemon
      level=info msg="  --trace-payloadlen='128'" subsys=daemon
      level=info msg="  --tunnel='disabled'" subsys=daemon
      level=info msg="  --version='false'" subsys=daemon
      level=info msg="  --write-cni-conf-when-ready=''" subsys=daemon
      level=info msg="     _ _ _" subsys=daemon
      level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
      level=info msg="|  _| | | | | |     |" subsys=daemon
      level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
      level=info msg="Cilium 1.9.4 07b62884c 2021-02-03T11:45:44-08:00 go version go1.15.7 linux/amd64" subsys=daemon
      level=info msg="cilium-envoy  version: 1177896bebde79915fe5f9092409bf0254084b4e/1.14.5/Modified/RELEASE/BoringSSL" subsys=daemon
      level=info msg="clang (10.0.0) and kernel (5.11.1) versions: OK!" subsys=linux-datapath
      level=info msg="linking environment: OK!" subsys=linux-datapath
      level=info msg="BPF system config check: NOT OK." error="Kernel Config file not found" subsys=linux-datapath
      level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
      level=info msg="Valid label prefix configuration:" subsys=labels-filter
      level=info msg=" - :io.kubernetes.pod.namespace" subsys=labels-filter
      level=info msg=" - :io.cilium.k8s.namespace.labels" subsys=labels-filter
      level=info msg=" - :app.kubernetes.io" subsys=labels-filter
      level=info msg=" - !:io.kubernetes" subsys=labels-filter
      level=info msg=" - !:kubernetes.io" subsys=labels-filter
      level=info msg=" - !:.*beta.kubernetes.io" subsys=labels-filter
      level=info msg=" - !:k8s.io" subsys=labels-filter
      level=info msg=" - !:pod-template-generation" subsys=labels-filter
      level=info msg=" - !:pod-template-hash" subsys=labels-filter
      level=info msg=" - !:controller-revision-hash" subsys=labels-filter
      level=info msg=" - !:annotation.*" subsys=labels-filter
      level=info msg=" - !:etcd_node" subsys=labels-filter
      level=info msg="Auto-disabling "enable-bpf-clock-probe" feature since KERNEL_HZ cannot be determined" error="Cannot probe CONFIG_HZ" subsys=daemon
      level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=10.252.0.0/16
      level=info msg="Initializing daemon" subsys=daemon
      level=info msg="Establishing connection to apiserver" host="https://apiserver.qiangyun.com:6443" subsys=k8s
      level=info msg="Connected to apiserver" subsys=k8s
      level=info msg="Inheriting MTU from external network interface" device=eth0 ipAddr=10.1.0.252 mtu=1500 subsys=mtu
      level=info msg="Trying to auto-enable "enable-node-port", "enable-external-ips", "enable-host-reachable-services", "enable-host-port", "enable-session-affinity" features" subsys=daemon
      level=info msg="Restored services from maps" failed=0 restored=0 subsys=service
      level=info msg="Reading old endpoints..." subsys=daemon
      level=info msg="No old endpoints found." subsys=daemon
      level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock" subsys=envoy-manager
      level=error msg="Command execution failed" cmd="[iptables -t mangle -n -L CILIUM_PRE_mangle]" error="exit status 1" subsys=iptables
      level=warning msg="iptables: No chain/target/match by that name." subsys=iptables
      level=info msg="Waiting until all Cilium CRDs are available" subsys=k8s
      level=info msg="All Cilium CRDs have been found and are available" subsys=k8s
      level=info msg="Creating or updating CiliumNode resource" node=prod-k8s-cp1 subsys=nodediscovery
      level=info msg="Successfully created CiliumNode resource" subsys=nodediscovery
      level=info msg="Retrieved node information from cilium node" nodeName=prod-k8s-cp1 subsys=k8s
      level=warning msg="Waiting for k8s node information" error="required IPv4 PodCIDR not available" subsys=k8s
      level=info msg="Retrieved node information from cilium node" nodeName=prod-k8s-cp1 subsys=k8s
      level=warning msg="Waiting for k8s node information" error="required IPv4 PodCIDR not available" subsys=k8s
      level=info msg="Retrieved node information from cilium node" nodeName=prod-k8s-cp1 subsys=k8s
      level=info msg="Received own node information from API server" ipAddr.ipv4=10.1.0.252 ipAddr.ipv6="<nil>" k8sNodeIP=10.1.0.252 labels="map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:prod-k8s-cp1 kubernetes.io/os:linux node-role.kubernetes.io/master:]" nodeName=prod-k8s-cp1 subsys=k8s v4Prefix=172.21.0.128/26 v6Prefix="<nil>"
      level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
      level=info msg="Using auto-derived devices for BPF node port" devices="[eth0]" directRoutingDevice=eth0 subsys=daemon
      level=info msg="Enabling k8s event listener" subsys=k8s-watcher
      level=info msg="Waiting until all pre-existing resources related to policy have been received" subsys=k8s-watcher
      level=info msg="Removing stale endpoint interfaces" subsys=daemon
      level=info msg="Skipping kvstore configuration" subsys=daemon
      level=info msg="Initializing node addressing" subsys=daemon
      level=info msg="Initializing cluster-pool IPAM" subsys=ipam v4Prefix=172.21.0.128/26 v6Prefix="<nil>"
      level=info msg="Restoring endpoints..." subsys=daemon
      level=info msg="Endpoints restored" failed=0 restored=0 subsys=daemon
      level=info msg="Addressing information:" subsys=daemon
      level=info msg="  Cluster-Name: default" subsys=daemon
      level=info msg="  Cluster-ID: 0" subsys=daemon
      level=info msg="  Local node-name: prod-k8s-cp1" subsys=daemon
      level=info msg="  Node-IPv6: <nil>" subsys=daemon
      level=info msg="  External-Node IPv4: 10.1.0.252" subsys=daemon
      level=info msg="  Internal-Node IPv4: 172.21.0.149" subsys=daemon
      level=info msg="  IPv4 allocation prefix: 172.21.0.128/26" subsys=daemon
      level=info msg="  IPv4 native routing prefix: 172.21.0.0/20" subsys=daemon
      level=info msg="  Loopback IPv4: 169.254.42.1" subsys=daemon
      level=info msg="  Local IPv4 addresses:" subsys=daemon
      level=info msg="  - 10.1.0.252" subsys=daemon
      level=info msg="Creating or updating CiliumNode resource" node=prod-k8s-cp1 subsys=nodediscovery
      level=info msg="Adding local node to cluster" node="{prod-k8s-cp1 default [{InternalIP 10.1.0.252} {CiliumInternalIP 172.21.0.149}] 172.21.0.128/26 <nil> 172.21.0.186 <nil> 0 local 0 map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:prod-k8s-cp1 kubernetes.io/os:linux node-role.kubernetes.io/master:] 6}" subsys=nodediscovery
      level=info msg="Annotating k8s node" subsys=daemon v4CiliumHostIP.IPv4=172.21.0.149 v4Prefix=172.21.0.128/26 v4healthIP.IPv4=172.21.0.186 v6CiliumHostIP.IPv6="<nil>" v6Prefix="<nil>" v6healthIP.IPv6="<nil>"
      level=info msg="Initializing identity allocator" subsys=identity-cache
      level=info msg="Cluster-ID is not specified, skipping ClusterMesh initialization" subsys=daemon
      level=info msg="Setting up BPF datapath" bpfClockSource=ktime bpfInsnSet=v2 subsys=datapath-loader
      level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=net.core.bpf_jit_enable sysParamValue=1
      level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0
      level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1
      level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=kernel.timer_migration sysParamValue=0
      level=info msg="All pre-existing resources related to policy have been received; continuing" subsys=k8s-watcher
      level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.0.64/26 Src: <nil> Gw: 10.1.17.230 Flags: [] Table: 0}" error="route to destination 10.1.17.230 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
      level=info msg="Adding new proxy port rules for cilium-dns-egress:36775" proxy port name=cilium-dns-egress subsys=proxy
      level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
      level=info msg="Validating configured node address ranges" subsys=daemon
      level=info msg="Starting connection tracking garbage collector" subsys=daemon
      level=info msg="Starting IP identity watcher" subsys=ipcache
      level=info msg="Initial scan of connection tracking completed" subsys=ct-gc
      level=info msg="Regenerating restored endpoints" numRestored=0 subsys=daemon
      level=info msg="Datapath signal listener running" subsys=signal
      level=info msg="Creating host endpoint" subsys=daemon
      level=info msg="Finished regenerating restored endpoints" regenerated=0 subsys=daemon total=0
      level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2855 ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2855 identityLabels="k8s:node-role.kubernetes.io/master,reserved:host" ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2855 identity=1 identityLabels="k8s:node-role.kubernetes.io/master,reserved:host" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
      level=info msg="Launching Cilium health daemon" subsys=daemon
      level=info msg="Launching Cilium health endpoint" subsys=daemon
      level=info msg="Started healthz status API server" address="127.0.0.1:9876" subsys=daemon
      level=info msg="Initializing Cilium API" subsys=daemon
      level=info msg="Daemon initialization completed" bootstrapTime=6.500397391s subsys=daemon
      level=info msg="Configuring Hubble server" eventQueueSize=4096 maxFlows=4095 subsys=hubble
      level=info msg="Serving cilium API at unix:///var/run/cilium/cilium.sock" subsys=daemon
      level=info msg="Beginning to read perf buffer" startTime="2021-08-23 10:36:14.089494941 +0000 UTC m=+6.563405298" subsys=monitor-agent
      level=info msg="Starting local Hubble server" address="unix:///var/run/cilium/hubble.sock" subsys=hubble
      level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.1.0/26 Src: <nil> Gw: 10.1.17.229 Flags: [] Table: 0}" error="route to destination 10.1.17.229 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
      level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=626 ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=626 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=626 identity=4 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
      level=info msg="Compiled new BPF template" BPFCompilationTime=1.594668719s file-path=/var/run/cilium/state/templates/070a939417b8306206efe78af3771c70091cd452/bpf_host.o subsys=datapath-loader
      level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=2855 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Compiled new BPF template" BPFCompilationTime=1.294100586s file-path=/var/run/cilium/state/templates/2cd0c716a4b47bc3fea14439429e6798308a71a1/bpf_lxc.o subsys=datapath-loader
      level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=626 identity=4 ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
      level=info msg="Waiting for Hubble server TLS certificate and key files to be created" subsys=hubble
      level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.0.64/26 Src: <nil> Gw: 10.1.17.230 Flags: [] Table: 0}" error="route to destination 10.1.17.230 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
      level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.1.0/26 Src: <nil> Gw: 10.1.17.229 Flags: [] Table: 0}" error="route to destination 10.1.17.229 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
      level=info msg="Serving cilium health API at unix:///var/run/cilium/health.sock" subsys=health-server
      level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.0.64/26 Src: <nil> Gw: 10.1.17.230 Flags: [] Table: 0}" error="route to destination 10.1.17.230 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
      level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.1.0/26 Src: <nil> Gw: 10.1.17.229 Flags: [] Table: 0}" error="route to destination 10.1.17.229 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
    14. Why do the logs above end with errors? The initial install specified DSR mode, and DSR requires all backend nodes to be in the same subnet; the errors appear because the Control-plane and Worker nodes are in different subnets. The workaround is to switch to endpoint-routes mode (a quick way to confirm the running load-balancer mode is shown after the log excerpt below).
      level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.0.64/26 Src: <nil> Gw: 10.1.17.230 Flags: [] Table: 0}" error="route to destination 10.1.17.230 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
      level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.1.0/26 Src: <nil> Gw: 10.1.17.229 Flags: [] Table: 0}" error="route to destination 10.1.17.229 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
      level=info msg="Serving cilium health API at unix:///var/run/cilium/health.sock" subsys=health-server
      level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.0.64/26 Src: <nil> Gw: 10.1.17.230 Flags: [] Table: 0}" error="route to destination 10.1.17.230 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
      level=warning msg="Unable to install direct node route {Ifindex: 0 Dst: 172.21.1.0/26 Src: <nil> Gw: 10.1.17.229 Flags: [] Table: 0}" error="route to destination 10.1.17.229 contains gateway 10.1.0.253, must be directly reachable" subsys=linux-datapath
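      To confirm which load-balancer mode the agent actually came up with, a check like the following may help (a sketch assuming the Cilium DaemonSet keeps its default name in kube-system; the exact output format varies by version):

      # Print the kube-proxy replacement / load-balancer details from one agent
      kubectl -n kube-system exec ds/cilium -- cilium status --verbose | grep -iA5 "KubeProxyReplacement"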
    15. Reinstall in endpointRoutes mode, as follows (a quick health check of the reinstalled agents follows the startup log):
      helm install cilium cilium/cilium --version 1.9.4 \
          --namespace kube-system \
          --set tunnel=disabled \
          --set endpointRoutes.enabled=true \
          --set kubeProxyReplacement=strict \
          --set loadBalancer.mode=hybrid \
          --set nativeRoutingCIDR=172.21.0.0/20 \
          --set ipam.operator.clusterPoolIPv4PodCIDR=172.21.0.0/20 \
          --set ipam.operator.clusterPoolIPv4MaskSize=26 \
          --set k8sServiceHost=apiserver.qiangyun.com \
          --set k8sServicePort=6443
      
      <root@PROD-K8S-CP1 ~># docker logs -f a34
      level=info msg="Skipped reading configuration file" reason="Config File "ciliumd" Not Found in "[/root]"" subsys=config
      level=info msg="Started gops server" address="127.0.0.1:9890" subsys=daemon
      level=info msg="Memory available for map entries (0.003% of 8065818624B): 20164546B" subsys=config
      level=info msg="option bpf-ct-global-tcp-max set by dynamic sizing to 131072" subsys=config
      level=info msg="option bpf-ct-global-any-max set by dynamic sizing to 65536" subsys=config
      level=info msg="option bpf-nat-global-max set by dynamic sizing to 131072" subsys=config
      level=info msg="option bpf-neigh-global-max set by dynamic sizing to 131072" subsys=config
      level=info msg="option bpf-sock-rev-map-max set by dynamic sizing to 65536" subsys=config
      level=info msg="  --agent-health-port='9876'" subsys=daemon
      level=info msg="  --agent-labels=''" subsys=daemon
      level=info msg="  --allow-icmp-frag-needed='true'" subsys=daemon
      level=info msg="  --allow-localhost='auto'" subsys=daemon
      level=info msg="  --annotate-k8s-node='true'" subsys=daemon
      level=info msg="  --api-rate-limit='map[]'" subsys=daemon
      level=info msg="  --auto-create-cilium-node-resource='true'" subsys=daemon
      level=info msg="  --auto-direct-node-routes='false'" subsys=daemon
      level=info msg="  --blacklist-conflicting-routes='false'" subsys=daemon
      level=info msg="  --bpf-compile-debug='false'" subsys=daemon
      level=info msg="  --bpf-ct-global-any-max='262144'" subsys=daemon
      level=info msg="  --bpf-ct-global-tcp-max='524288'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-regular-tcp='6h0m0s'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
      level=info msg="  --bpf-ct-timeout-service-tcp='6h0m0s'" subsys=daemon
      level=info msg="  --bpf-fragments-map-max='8192'" subsys=daemon
      level=info msg="  --bpf-lb-acceleration='disabled'" subsys=daemon
      level=info msg="  --bpf-lb-algorithm='random'" subsys=daemon
      level=info msg="  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'" subsys=daemon
      level=info msg="  --bpf-lb-maglev-table-size='16381'" subsys=daemon
      level=info msg="  --bpf-lb-map-max='65536'" subsys=daemon
      level=info msg="  --bpf-lb-mode='snat'" subsys=daemon
      level=info msg="  --bpf-map-dynamic-size-ratio='0.0025'" subsys=daemon
      level=info msg="  --bpf-nat-global-max='524288'" subsys=daemon
      level=info msg="  --bpf-neigh-global-max='524288'" subsys=daemon
      level=info msg="  --bpf-policy-map-max='16384'" subsys=daemon
      level=info msg="  --bpf-root=''" subsys=daemon
      level=info msg="  --bpf-sock-rev-map-max='262144'" subsys=daemon
      level=info msg="  --certificates-directory='/var/run/cilium/certs'" subsys=daemon
      level=info msg="  --cgroup-root=''" subsys=daemon
      level=info msg="  --cluster-id=''" subsys=daemon
      level=info msg="  --cluster-name='default'" subsys=daemon
      level=info msg="  --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
      level=info msg="  --cmdref=''" subsys=daemon
      level=info msg="  --config=''" subsys=daemon
      level=info msg="  --config-dir='/tmp/cilium/config-map'" subsys=daemon
      level=info msg="  --conntrack-gc-interval='0s'" subsys=daemon
      level=info msg="  --crd-wait-timeout='5m0s'" subsys=daemon
      level=info msg="  --datapath-mode='veth'" subsys=daemon
      level=info msg="  --debug='false'" subsys=daemon
      level=info msg="  --debug-verbose=''" subsys=daemon
      level=info msg="  --device=''" subsys=daemon
      level=info msg="  --devices=''" subsys=daemon
      level=info msg="  --direct-routing-device=''" subsys=daemon
      level=info msg="  --disable-cnp-status-updates='true'" subsys=daemon
      level=info msg="  --disable-conntrack='false'" subsys=daemon
      level=info msg="  --disable-endpoint-crd='false'" subsys=daemon
      level=info msg="  --disable-envoy-version-check='false'" subsys=daemon
      level=info msg="  --disable-iptables-feeder-rules=''" subsys=daemon
      level=info msg="  --dns-max-ips-per-restored-rule='1000'" subsys=daemon
      level=info msg="  --egress-masquerade-interfaces=''" subsys=daemon
      level=info msg="  --egress-multi-home-ip-rule-compat='false'" subsys=daemon
      level=info msg="  --enable-auto-protect-node-port-range='true'" subsys=daemon
      level=info msg="  --enable-bandwidth-manager='false'" subsys=daemon
      level=info msg="  --enable-bpf-clock-probe='true'" subsys=daemon
      level=info msg="  --enable-bpf-masquerade='true'" subsys=daemon
      level=info msg="  --enable-bpf-tproxy='false'" subsys=daemon
      level=info msg="  --enable-endpoint-health-checking='true'" subsys=daemon
      level=info msg="  --enable-endpoint-routes='true'" subsys=daemon
      level=info msg="  --enable-external-ips='true'" subsys=daemon
      level=info msg="  --enable-health-check-nodeport='true'" subsys=daemon
      level=info msg="  --enable-health-checking='true'" subsys=daemon
      level=info msg="  --enable-host-firewall='false'" subsys=daemon
      level=info msg="  --enable-host-legacy-routing='false'" subsys=daemon
      level=info msg="  --enable-host-port='true'" subsys=daemon
      level=info msg="  --enable-host-reachable-services='false'" subsys=daemon
      level=info msg="  --enable-hubble='true'" subsys=daemon
      level=info msg="  --enable-identity-mark='true'" subsys=daemon
      level=info msg="  --enable-ip-masq-agent='false'" subsys=daemon
      level=info msg="  --enable-ipsec='false'" subsys=daemon
      level=info msg="  --enable-ipv4='true'" subsys=daemon
      level=info msg="  --enable-ipv4-fragment-tracking='true'" subsys=daemon
      level=info msg="  --enable-ipv6='false'" subsys=daemon
      level=info msg="  --enable-ipv6-ndp='false'" subsys=daemon
      level=info msg="  --enable-k8s-api-discovery='false'" subsys=daemon
      level=info msg="  --enable-k8s-endpoint-slice='true'" subsys=daemon
      level=info msg="  --enable-k8s-event-handover='false'" subsys=daemon
      level=info msg="  --enable-l7-proxy='true'" subsys=daemon
      level=info msg="  --enable-local-node-route='true'" subsys=daemon
      level=info msg="  --enable-local-redirect-policy='false'" subsys=daemon
      level=info msg="  --enable-monitor='true'" subsys=daemon
      level=info msg="  --enable-node-port='false'" subsys=daemon
      level=info msg="  --enable-policy='default'" subsys=daemon
      level=info msg="  --enable-remote-node-identity='true'" subsys=daemon
      level=info msg="  --enable-selective-regeneration='true'" subsys=daemon
      level=info msg="  --enable-session-affinity='true'" subsys=daemon
      level=info msg="  --enable-svc-source-range-check='true'" subsys=daemon
      level=info msg="  --enable-tracing='false'" subsys=daemon
      level=info msg="  --enable-well-known-identities='false'" subsys=daemon
      level=info msg="  --enable-xt-socket-fallback='true'" subsys=daemon
      level=info msg="  --encrypt-interface=''" subsys=daemon
      level=info msg="  --encrypt-node='false'" subsys=daemon
      level=info msg="  --endpoint-interface-name-prefix='lxc+'" subsys=daemon
      level=info msg="  --endpoint-queue-size='25'" subsys=daemon
      level=info msg="  --endpoint-status=''" subsys=daemon
      level=info msg="  --envoy-log=''" subsys=daemon
      level=info msg="  --exclude-local-address=''" subsys=daemon
      level=info msg="  --fixed-identity-mapping='map[]'" subsys=daemon
      level=info msg="  --flannel-master-device=''" subsys=daemon
      level=info msg="  --flannel-uninstall-on-exit='false'" subsys=daemon
      level=info msg="  --force-local-policy-eval-at-source='true'" subsys=daemon
      level=info msg="  --gops-port='9890'" subsys=daemon
      level=info msg="  --host-reachable-services-protos='tcp,udp'" subsys=daemon
      level=info msg="  --http-403-msg=''" subsys=daemon
      level=info msg="  --http-idle-timeout='0'" subsys=daemon
      level=info msg="  --http-max-grpc-timeout='0'" subsys=daemon
      level=info msg="  --http-request-timeout='3600'" subsys=daemon
      level=info msg="  --http-retry-count='3'" subsys=daemon
      level=info msg="  --http-retry-timeout='0'" subsys=daemon
      level=info msg="  --hubble-disable-tls='false'" subsys=daemon
      level=info msg="  --hubble-event-queue-size='0'" subsys=daemon
      level=info msg="  --hubble-flow-buffer-size='4095'" subsys=daemon
      level=info msg="  --hubble-listen-address=':4244'" subsys=daemon
      level=info msg="  --hubble-metrics=''" subsys=daemon
      level=info msg="  --hubble-metrics-server=''" subsys=daemon
      level=info msg="  --hubble-socket-path='/var/run/cilium/hubble.sock'" subsys=daemon
      level=info msg="  --hubble-tls-cert-file='/var/lib/cilium/tls/hubble/server.crt'" subsys=daemon
      level=info msg="  --hubble-tls-client-ca-files='/var/lib/cilium/tls/hubble/client-ca.crt'" subsys=daemon
      level=info msg="  --hubble-tls-key-file='/var/lib/cilium/tls/hubble/server.key'" subsys=daemon
      level=info msg="  --identity-allocation-mode='crd'" subsys=daemon
      level=info msg="  --identity-change-grace-period='5s'" subsys=daemon
      level=info msg="  --install-iptables-rules='true'" subsys=daemon
      level=info msg="  --ip-allocation-timeout='2m0s'" subsys=daemon
      level=info msg="  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'" subsys=daemon
      level=info msg="  --ipam='cluster-pool'" subsys=daemon
      level=info msg="  --ipsec-key-file=''" subsys=daemon
      level=info msg="  --iptables-lock-timeout='5s'" subsys=daemon
      level=info msg="  --iptables-random-fully='false'" subsys=daemon
      level=info msg="  --ipv4-node='auto'" subsys=daemon
      level=info msg="  --ipv4-pod-subnets=''" subsys=daemon
      level=info msg="  --ipv4-range='auto'" subsys=daemon
      level=info msg="  --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
      level=info msg="  --ipv4-service-range='auto'" subsys=daemon
      level=info msg="  --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
      level=info msg="  --ipv6-mcast-device=''" subsys=daemon
      level=info msg="  --ipv6-node='auto'" subsys=daemon
      level=info msg="  --ipv6-pod-subnets=''" subsys=daemon
      level=info msg="  --ipv6-range='auto'" subsys=daemon
      level=info msg="  --ipv6-service-range='auto'" subsys=daemon
      level=info msg="  --ipvlan-master-device='undefined'" subsys=daemon
      level=info msg="  --join-cluster='false'" subsys=daemon
      level=info msg="  --k8s-api-server=''" subsys=daemon
      level=info msg="  --k8s-force-json-patch='false'" subsys=daemon
      level=info msg="  --k8s-heartbeat-timeout='30s'" subsys=daemon
      level=info msg="  --k8s-kubeconfig-path=''" subsys=daemon
      level=info msg="  --k8s-namespace='kube-system'" subsys=daemon
      level=info msg="  --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
      level=info msg="  --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
      level=info msg="  --k8s-service-cache-size='128'" subsys=daemon
      level=info msg="  --k8s-service-proxy-name=''" subsys=daemon
      level=info msg="  --k8s-sync-timeout='3m0s'" subsys=daemon
      level=info msg="  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
      level=info msg="  --k8s-watcher-queue-size='1024'" subsys=daemon
      level=info msg="  --keep-config='false'" subsys=daemon
      level=info msg="  --kube-proxy-replacement='strict'" subsys=daemon
      level=info msg="  --kube-proxy-replacement-healthz-bind-address=''" subsys=daemon
      level=info msg="  --kvstore=''" subsys=daemon
      level=info msg="  --kvstore-connectivity-timeout='2m0s'" subsys=daemon
      level=info msg="  --kvstore-lease-ttl='15m0s'" subsys=daemon
      level=info msg="  --kvstore-opt='map[]'" subsys=daemon
      level=info msg="  --kvstore-periodic-sync='5m0s'" subsys=daemon
      level=info msg="  --label-prefix-file=''" subsys=daemon
      level=info msg="  --labels=''" subsys=daemon
      level=info msg="  --lib-dir='/var/lib/cilium'" subsys=daemon
      level=info msg="  --log-driver=''" subsys=daemon
      level=info msg="  --log-opt='map[]'" subsys=daemon
      level=info msg="  --log-system-load='false'" subsys=daemon
      level=info msg="  --masquerade='true'" subsys=daemon
      level=info msg="  --max-controller-interval='0'" subsys=daemon
      level=info msg="  --metrics=''" subsys=daemon
      level=info msg="  --monitor-aggregation='medium'" subsys=daemon
      level=info msg="  --monitor-aggregation-flags='all'" subsys=daemon
      level=info msg="  --monitor-aggregation-interval='5s'" subsys=daemon
      level=info msg="  --monitor-queue-size='0'" subsys=daemon
      level=info msg="  --mtu='0'" subsys=daemon
      level=info msg="  --nat46-range='0:0:0:0:0:FFFF::/96'" subsys=daemon
      level=info msg="  --native-routing-cidr='172.21.0.0/20'" subsys=daemon
      level=info msg="  --node-port-acceleration='disabled'" subsys=daemon
      level=info msg="  --node-port-algorithm='random'" subsys=daemon
      level=info msg="  --node-port-bind-protection='true'" subsys=daemon
      level=info msg="  --node-port-mode='hybrid'" subsys=daemon
      level=info msg="  --node-port-range='30000,32767'" subsys=daemon
      level=info msg="  --policy-audit-mode='false'" subsys=daemon
      level=info msg="  --policy-queue-size='100'" subsys=daemon
      level=info msg="  --policy-trigger-interval='1s'" subsys=daemon
      level=info msg="  --pprof='false'" subsys=daemon
      level=info msg="  --preallocate-bpf-maps='false'" subsys=daemon
      level=info msg="  --prefilter-device='undefined'" subsys=daemon
      level=info msg="  --prefilter-mode='native'" subsys=daemon
      level=info msg="  --prepend-iptables-chains='true'" subsys=daemon
      level=info msg="  --prometheus-serve-addr=''" subsys=daemon
      level=info msg="  --proxy-connect-timeout='1'" subsys=daemon
      level=info msg="  --proxy-prometheus-port='0'" subsys=daemon
      level=info msg="  --read-cni-conf=''" subsys=daemon
      level=info msg="  --restore='true'" subsys=daemon
      level=info msg="  --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
      level=info msg="  --single-cluster-route='false'" subsys=daemon
      level=info msg="  --skip-crd-creation='false'" subsys=daemon
      level=info msg="  --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
      level=info msg="  --sockops-enable='false'" subsys=daemon
      level=info msg="  --state-dir='/var/run/cilium'" subsys=daemon
      level=info msg="  --tofqdns-dns-reject-response-code='refused'" subsys=daemon
      level=info msg="  --tofqdns-enable-dns-compression='true'" subsys=daemon
      level=info msg="  --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
      level=info msg="  --tofqdns-max-deferred-connection-deletes='10000'" subsys=daemon
      level=info msg="  --tofqdns-min-ttl='0'" subsys=daemon
      level=info msg="  --tofqdns-pre-cache=''" subsys=daemon
      level=info msg="  --tofqdns-proxy-port='0'" subsys=daemon
      level=info msg="  --tofqdns-proxy-response-max-delay='100ms'" subsys=daemon
      level=info msg="  --trace-payloadlen='128'" subsys=daemon
      level=info msg="  --tunnel='disabled'" subsys=daemon
      level=info msg="  --version='false'" subsys=daemon
      level=info msg="  --write-cni-conf-when-ready=''" subsys=daemon
      level=info msg="     _ _ _" subsys=daemon
      level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
      level=info msg="|  _| | | | | |     |" subsys=daemon
      level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
      level=info msg="Cilium 1.9.4 07b62884c 2021-02-03T11:45:44-08:00 go version go1.15.7 linux/amd64" subsys=daemon
      level=info msg="cilium-envoy  version: 1177896bebde79915fe5f9092409bf0254084b4e/1.14.5/Modified/RELEASE/BoringSSL" subsys=daemon
      level=info msg="clang (10.0.0) and kernel (5.11.1) versions: OK!" subsys=linux-datapath
      level=info msg="linking environment: OK!" subsys=linux-datapath
      level=info msg="BPF system config check: NOT OK." error="Kernel Config file not found" subsys=linux-datapath
      level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
      level=info msg="Valid label prefix configuration:" subsys=labels-filter
      level=info msg=" - :io.kubernetes.pod.namespace" subsys=labels-filter
      level=info msg=" - :io.cilium.k8s.namespace.labels" subsys=labels-filter
      level=info msg=" - :app.kubernetes.io" subsys=labels-filter
      level=info msg=" - !:io.kubernetes" subsys=labels-filter
      level=info msg=" - !:kubernetes.io" subsys=labels-filter
      level=info msg=" - !:.*beta.kubernetes.io" subsys=labels-filter
      level=info msg=" - !:k8s.io" subsys=labels-filter
      level=info msg=" - !:pod-template-generation" subsys=labels-filter
      level=info msg=" - !:pod-template-hash" subsys=labels-filter
      level=info msg=" - !:controller-revision-hash" subsys=labels-filter
      level=info msg=" - !:annotation.*" subsys=labels-filter
      level=info msg=" - !:etcd_node" subsys=labels-filter
      level=info msg="Auto-disabling "enable-bpf-clock-probe" feature since KERNEL_HZ cannot be determined" error="Cannot probe CONFIG_HZ" subsys=daemon
      level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=10.252.0.0/16
      level=info msg="Initializing daemon" subsys=daemon
      level=info msg="Establishing connection to apiserver" host="https://apiserver.qiangyun.com:6443" subsys=k8s
      level=info msg="Connected to apiserver" subsys=k8s
      level=info msg="Inheriting MTU from external network interface" device=eth0 ipAddr=10.1.0.252 mtu=1500 subsys=mtu
      level=info msg="Trying to auto-enable "enable-node-port", "enable-external-ips", "enable-host-reachable-services", "enable-host-port", "enable-session-affinity" features" subsys=daemon
      level=info msg="BPF host routing is incompatible with enable-endpoint-routes. Falling back to legacy host routing (enable-host-legacy-routing=true)." subsys=daemon
      level=info msg="Restored services from maps" failed=0 restored=3 subsys=service
      level=info msg="Reading old endpoints..." subsys=daemon
      level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock" subsys=envoy-manager
      level=info msg="No old endpoints found." subsys=daemon
      level=error msg="Command execution failed" cmd="[iptables -t mangle -n -L CILIUM_PRE_mangle]" error="exit status 1" subsys=iptables
      level=warning msg="iptables: No chain/target/match by that name." subsys=iptables
      level=info msg="Waiting until all Cilium CRDs are available" subsys=k8s
      level=info msg="All Cilium CRDs have been found and are available" subsys=k8s
      level=info msg="Creating or updating CiliumNode resource" node=prod-k8s-cp1 subsys=nodediscovery
      level=info msg="Retrieved node information from cilium node" nodeName=prod-k8s-cp1 subsys=k8s
      level=info msg="Received own node information from API server" ipAddr.ipv4=10.1.0.252 ipAddr.ipv6="<nil>" k8sNodeIP=10.1.0.252 labels="map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:prod-k8s-cp1 kubernetes.io/os:linux node-role.kubernetes.io/master:]" nodeName=prod-k8s-cp1 subsys=k8s v4Prefix=172.21.0.128/26 v6Prefix="<nil>"
      level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
      level=info msg="Using auto-derived devices for BPF node port" devices="[eth0]" directRoutingDevice=eth0 subsys=daemon
      level=info msg="Enabling k8s event listener" subsys=k8s-watcher
      level=info msg="Removing stale endpoint interfaces" subsys=daemon
      level=info msg="Waiting until all pre-existing resources related to policy have been received" subsys=k8s-watcher
      level=info msg="Skipping kvstore configuration" subsys=daemon
      level=info msg="Restored router address from node_config" file=/var/run/cilium/state/globals/node_config.h ipv4=172.21.0.155 ipv6="<nil>" subsys=node
      level=info msg="Initializing node addressing" subsys=daemon
      level=info msg="Initializing cluster-pool IPAM" subsys=ipam v4Prefix=172.21.0.128/26 v6Prefix="<nil>"
      level=info msg="Restoring endpoints..." subsys=daemon
      level=info msg="Endpoints restored" failed=0 restored=0 subsys=daemon
      level=info msg="Addressing information:" subsys=daemon
      level=info msg="  Cluster-Name: default" subsys=daemon
      level=info msg="  Cluster-ID: 0" subsys=daemon
      level=info msg="  Local node-name: prod-k8s-cp1" subsys=daemon
      level=info msg="  Node-IPv6: <nil>" subsys=daemon
      level=info msg="  External-Node IPv4: 10.1.0.252" subsys=daemon
      level=info msg="  Internal-Node IPv4: 172.21.0.155" subsys=daemon
      level=info msg="  IPv4 allocation prefix: 172.21.0.128/26" subsys=daemon
      level=info msg="  IPv4 native routing prefix: 172.21.0.0/20" subsys=daemon
      level=info msg="  Loopback IPv4: 169.254.42.1" subsys=daemon
      level=info msg="  Local IPv4 addresses:" subsys=daemon
      level=info msg="  - 10.1.0.252" subsys=daemon
      level=info msg="Creating or updating CiliumNode resource" node=prod-k8s-cp1 subsys=nodediscovery
      level=info msg="Adding local node to cluster" node="{prod-k8s-cp1 default [{InternalIP 10.1.0.252} {CiliumInternalIP 172.21.0.155}] 172.21.0.128/26 <nil> 172.21.0.146 <nil> 0 local 0 map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:prod-k8s-cp1 kubernetes.io/os:linux node-role.kubernetes.io/master:] 6}" subsys=nodediscovery
      level=info msg="Annotating k8s node" subsys=daemon v4CiliumHostIP.IPv4=172.21.0.155 v4Prefix=172.21.0.128/26 v4healthIP.IPv4=172.21.0.146 v6CiliumHostIP.IPv6="<nil>" v6Prefix="<nil>" v6healthIP.IPv6="<nil>"
      level=info msg="Initializing identity allocator" subsys=identity-cache
      level=info msg="Cluster-ID is not specified, skipping ClusterMesh initialization" subsys=daemon
      level=info msg="Setting up BPF datapath" bpfClockSource=ktime bpfInsnSet=v2 subsys=datapath-loader
      level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=net.core.bpf_jit_enable sysParamValue=1
      level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0
      level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1
      level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=kernel.timer_migration sysParamValue=0
      level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
      level=info msg="All pre-existing resources related to policy have been received; continuing" subsys=k8s-watcher
      level=info msg="Adding new proxy port rules for cilium-dns-egress:36923" proxy port name=cilium-dns-egress subsys=proxy
      level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
      level=info msg="Validating configured node address ranges" subsys=daemon
      level=info msg="Starting connection tracking garbage collector" subsys=daemon
      level=info msg="Starting IP identity watcher" subsys=ipcache
      level=info msg="Initial scan of connection tracking completed" subsys=ct-gc
      level=info msg="Regenerating restored endpoints" numRestored=0 subsys=daemon
      level=info msg="Datapath signal listener running" subsys=signal
      level=info msg="Creating host endpoint" subsys=daemon
      level=info msg="Finished regenerating restored endpoints" regenerated=0 subsys=daemon total=0
      level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1631 ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1631 identityLabels="k8s:node-role.kubernetes.io/master,reserved:host" ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1631 identity=1 identityLabels="k8s:node-role.kubernetes.io/master,reserved:host" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
      level=info msg="Launching Cilium health daemon" subsys=daemon
      level=info msg="Launching Cilium health endpoint" subsys=daemon
      level=info msg="Started healthz status API server" address="127.0.0.1:9876" subsys=daemon
      level=info msg="Initializing Cilium API" subsys=daemon
      level=info msg="Daemon initialization completed" bootstrapTime=5.273111833s subsys=daemon
      level=info msg="Configuring Hubble server" eventQueueSize=4096 maxFlows=4095 subsys=hubble
      level=info msg="Serving cilium API at unix:///var/run/cilium/cilium.sock" subsys=daemon
      level=info msg="Beginning to read perf buffer" startTime="2021-08-23 11:50:02.907610703 +0000 UTC m=+5.334524512" subsys=monitor-agent
      level=info msg="Starting local Hubble server" address="unix:///var/run/cilium/hubble.sock" subsys=hubble
      level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2993 ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2993 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2993 identity=4 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
      level=info msg="Compiled new BPF template" BPFCompilationTime=1.573768687s file-path=/var/run/cilium/state/templates/c2ab4c32b5e410ece1e4936f2ee4756c6b636151/bpf_host.o subsys=datapath-loader
      level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=1631 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Compiled new BPF template" BPFCompilationTime=1.308964876s file-path=/var/run/cilium/state/templates/43718f4689bfffe78a6ab433df688d2f9e7bc41a/bpf_lxc.o subsys=datapath-loader
      level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=2993 identity=4 ipv4= ipv6= k8sPodName=/ subsys=endpoint
      level=info msg="Waiting for Hubble server TLS certificate and key files to be created" subsys=hubble
      level=info msg="Serving cilium health API at unix:///var/run/cilium/health.sock" subsys=health-server
      level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.001922607421875 newInterval=7m30s subsys=map-ct
    16. Adjust the timezone and resource limits of the Kubernetes components by editing the files under /etc/kubernetes/manifests/ directly, one node at a time, starting with etcd (a per-node sequence is sketched after the snippets below)
      # Timezone adjustment
          volumeMounts:
      ...
          - mountPath: /etc/localtime
            name: localtime
      
        volumes:
      ...
        - hostPath:
            path: /etc/localtime
            type: ""
          name: localtime

      ---------------------------------------------------------

      # Resource limits
          resources: 
            requests:
              cpu: 100m
              memory: 1024Mi
            limits:
              cpu: 2
              memory: 4096Mi
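      Since these are static pod manifests, the kubelet restarts the component automatically as soon as the file under /etc/kubernetes/manifests/ changes. A cautious per-node sequence might look like this (file names as in a standard kubeadm layout):

      # Back up the manifest before editing it
      cp /etc/kubernetes/manifests/etcd.yaml /root/etcd.yaml.bak
      # Edit the file, then watch the static pod come back before touching the next node
      kubectl -n kube-system get pods -o wide -w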
    17. Label the SYS-zone nodes (edited directly with Lens; equivalent kubectl commands are shown below)
      # PROD-SYS-K8S-WN1/2
      kubernetes.io/env: sys
      kubernetes.io/ingress: prod
      kubernetes.io/resource: addons
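      If Lens is not at hand, the same labels can be applied with kubectl (node names as used in this cluster):

      kubectl label node prod-sys-k8s-wn1 kubernetes.io/env=sys kubernetes.io/ingress=prod kubernetes.io/resource=addons
      kubectl label node prod-sys-k8s-wn2 kubernetes.io/env=sys kubernetes.io/ingress=prod kubernetes.io/resource=addons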
    18. Set NoExecute taints on the SYS-zone nodes for eviction (a quick way to verify the taints follows the describe output below)
      <root@PROD-K8S-CP1 ~># kubectl taint node prod-sys-k8s-wn1 resource=addons:NoExecute
      node/prod-sys-k8s-wn1 tainted
      <root@PROD-K8S-CP1 ~># kubectl taint node prod-sys-k8s-wn2 resource=addons:NoExecute
      node/prod-sys-k8s-wn2 tainted
      
      <root@PROD-K8S-CP1 ~># kubectl describe nodes prod-sys-k8s-wn1
      Name:               prod-sys-k8s-wn1
      Roles:              worker
      Labels:             beta.kubernetes.io/arch=amd64
                          beta.kubernetes.io/os=linux
                          kubernetes.io/arch=amd64
                          kubernetes.io/env=sys
                          kubernetes.io/hostname=prod-sys-k8s-wn1
                          kubernetes.io/ingress=prod
                          kubernetes.io/os=linux
                          kubernetes.io/resource=addons
                          node-role.kubernetes.io/worker=worker
      Annotations:        io.cilium.network.ipv4-cilium-host: 172.21.1.177
                          io.cilium.network.ipv4-health-ip: 172.21.1.185
                          io.cilium.network.ipv4-pod-cidr: 172.21.1.128/26
                          kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                          node.alpha.kubernetes.io/ttl: 0
                          volumes.kubernetes.io/controller-managed-attach-detach: true
      CreationTimestamp:  Mon, 23 Aug 2021 20:11:52 +0800
      Taints:             resource=addons:NoExecute
      
      <root@PROD-K8S-CP1 ~># kubectl describe nodes prod-sys-k8s-wn2
      Name:               prod-sys-k8s-wn2
      Roles:              worker
      Labels:             beta.kubernetes.io/arch=amd64
                          beta.kubernetes.io/os=linux
                          kubernetes.io/arch=amd64
                          kubernetes.io/env=sys
                          kubernetes.io/hostname=prod-sys-k8s-wn2
                          kubernetes.io/ingress=prod
                          kubernetes.io/os=linux
                          kubernetes.io/resource=addons
                          node-role.kubernetes.io/worker=worker
      Annotations:        io.cilium.network.ipv4-cilium-host: 172.21.1.65
                          io.cilium.network.ipv4-health-ip: 172.21.1.117
                          io.cilium.network.ipv4-pod-cidr: 172.21.1.64/26
                          kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                          node.alpha.kubernetes.io/ttl: 0
                          volumes.kubernetes.io/controller-managed-attach-detach: true
      CreationTimestamp:  Mon, 23 Aug 2021 20:12:06 +0800
      Taints:             resource=addons:NoExecute
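      A compact way to verify the taints (and, if ever needed, remove one again with a trailing "-") is:

      kubectl get node prod-sys-k8s-wn1 prod-sys-k8s-wn2 -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
      # kubectl taint node prod-sys-k8s-wn1 resource=addons:NoExecute-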
    19. Adjust the tolerations and node affinity of core-dns / cilium-operator (edited with Lens; kubectl equivalents follow the YAML below)
      # Adjust core-dns tolerations
            tolerations:
              - key: CriticalAddonsOnly
                operator: Exists
              - key: node-role.kubernetes.io/master
                effect: NoSchedule
              - key: resource
                value: addons
                effect: NoExecute
      
      # Adjust core-dns node affinity
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                    - matchExpressions:
                        - key: kubernetes.io/resource
                          operator: In
                          values:
                            - addons
              podAntiAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  - labelSelector:
                      matchExpressions:
                        - key: k8s-app
                          operator: In
                          values:
                            - kube-dns
                    topologyKey: kubernetes.io/hostname
      
      ------------------------------------------------------------------------------
      
      # Adjust cilium-operator tolerations
            tolerations:
              - operator: Exists
      
      # Adjust cilium-operator node affinity
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                    - matchExpressions:
                        - key: kubernetes.io/resource
                          operator: In
                          values:
                            - addons
              podAntiAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  - labelSelector:
                      matchExpressions:
                        - key: io.cilium/app
                          operator: In
                          values:
                            - operator
                    topologyKey: kubernetes.io/hostname
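      If Lens is not available, the same tolerations and affinity can be added by editing the deployments directly (names as created by kubeadm and the Cilium Helm chart):

      kubectl -n kube-system edit deployment coredns
      kubectl -n kube-system edit deployment cilium-operator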
    20. Adjust the resource limits of core-dns, cilium-agent, and cilium-operator (modified directly with Lens)

      # core-dns
                resources:
                  limits:
                    cpu: '2'
                    memory: 512Mi
                  requests:
                    cpu: 100m
                    memory: 70Mi
      
      ----------------------------------------------
      
      # cilium-operator
                resources:
                  limits:
                    cpu: '2'
                    memory: 1Gi
                  requests:
                    cpu: 100m
                    memory: 70Mi
      
      ----------------------------------------------
      
      # cilium-agent
      # Note the edit location: apply the limits to the cilium-agent container, not to the initContainers
                resources:
                  limits:
                    cpu: '2'
                    memory: 1Gi
                  requests:
                    cpu: 100m
                    memory: 100Mi
    21. Configure system resource reservation for the kubelet
      maxPods: 32
      systemReserved:
        cpu: 300m
        memory: 500Mi
      kubeReserved:
        cpu: 300m
        memory: 500Mi
      evictionHard:
        memory.available: "300Mi"
        nodefs.available: "15%"
        nodefs.inodesFree: "5%"
        imagefs.available: "15%"
      evictionMinimumReclaim:
        memory.available: "0Mi"
        nodefs.available: "20Gi"
        imagefs.available: "10Gi"
    22. Shortcut: apply the kubelet log directory and resource-reservation settings in one command (restart the kubelet afterwards, as noted below)
      mkdir /var/log/kubernetes && sed -i 's#KUBELET_EXTRA_ARGS=#KUBELET_EXTRA_ARGS=--logtostderr=false --log-dir=/var/log/kubernetes#g' /etc/sysconfig/kubelet && \
      sed -i '16a maxPods: 32 \
      systemReserved: \
        cpu: 300m \
        memory: 300Mi \
      kubeReserved: \
        cpu: 300m \
        memory: 200Mi \
      evictionHard: \
        memory.available: "300Mi" \
        nodefs.available: "15%" \
        nodefs.inodesFree: "5%" \
        imagefs.available: "15%" \
      evictionMinimumReclaim: \
        memory.available: "0Mi" \
        nodefs.available: "20Gi" \
        imagefs.available: "10Gi"' /var/lib/kubelet/config.yaml
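      The kubelet only reads /var/lib/kubelet/config.yaml at startup, so restart it after running the shortcut on each node:

      systemctl restart kubelet && systemctl status kubelet --no-pager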
    23. PriorityClass settings (a usage example follows the manifests)

      apiVersion: scheduling.k8s.io/v1
      kind: PriorityClass
      metadata:
        name: base-resource
      globalDefault: false
      value: 9999
      description: "system base resource"
      
      ---
      apiVersion: scheduling.k8s.io/v1
      kind: PriorityClass
      metadata:
        name: provider-service
      globalDefault: false
      value: 8888
      description: "dubbo provider service"
      
      ---
      apiVersion: scheduling.k8s.io/v1
      kind: PriorityClass
      metadata:
        name: consumer-service
      globalDefault: false
      value: 7777
      description: "dubbo consumer service"
      
      ---
      apiVersion: scheduling.k8s.io/v1
      kind: PriorityClass
      metadata:
        name: admin-service
      globalDefault: false
      value: 6666
      description: "system management"
      
      ---
      apiVersion: scheduling.k8s.io/v1
      kind: PriorityClass
      metadata:
        name: monitor-service
      globalDefault: false
      value: 5555
      description: "system monitor service for prometheus"
  • Original article: https://www.cnblogs.com/apink/p/15176060.html