• Installing a Kubernetes 1.17 cluster (also applicable to EulerOS)


    I. Prepare the lab environment

    1. Prepare the virtual machines used to install the k8s cluster (two CentOS 7 machines and one EulerOS machine); their configurations are listed below.

    k8s-master (192.168.1.237) configuration:

    OS: CentOS 7.4, 7.5, 7.6 or later. Specs: 4 CPU cores, 8 GB RAM, two 60 GB disks. Network: bridged.

    k8s-node1 (192.168.1.238) configuration:

    OS: CentOS 7.6. Specs: 4 CPU cores, 4 GB RAM, two 60 GB disks. Network: bridged.

    k8s-node2 (192.168.1.244) configuration:

    OS: EulerOS SP5. Specs: 4 CPU cores, 4 GB RAM, two 60 GB disks. Network: bridged.

    2. Install base packages (run on every node)

    yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack

     

    3. Stop and disable the firewalld firewall (run on every node). CentOS 7 uses firewalld by default; stop the service and disable it so it does not start on boot.

    systemctl  stop firewalld  && systemctl  disable  firewalld

     

    4. Install iptables (run on every node). If you are not comfortable with firewalld you can install iptables instead; this step is optional, adjust it to your own needs.

    4.1 Install iptables

    yum install iptables-services -y

    4.2 Stop and disable iptables

    service iptables stop   && systemctl disable iptables

    5. Time synchronization (run on every node)

     

    5.1 Sync the time once

    ntpdate cn.pool.ntp.org

    5.2 Add a cron job that syncs the time every hour

    crontab -e

    0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org

    6. Disable SELinux (run on every node)

    Disable SELinux permanently so it remains off after the machine is rebooted.

    Modify the /etc/sysconfig/selinux file:

    sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux

    After modifying the file, reboot the virtual machine for the change to take effect:

    reboot
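    Optionally, SELinux can also be switched off for the current session without rebooting, and the state verified afterwards (a small convenience not in the original steps):

    setenforce 0    # takes effect immediately, lost on reboot
    getenforce      # prints Permissive now, or Disabled after the reboot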

    7. Disable swap (run on every node)

    swapoff -a    # disable swap for the current session
    # To disable it permanently, comment out the swap line in /etc/fstab:
    sed -i 's/.*swap.*/#&/' /etc/fstab
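    A quick check that swap is really off (not part of the original write-up):

    free -m          # the Swap line should show 0 total
    swapon --show    # should print nothing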

    8. Tune kernel parameters

    cat > /etc/sysctl.d/kubernetes.conf << EOF

     net.bridge.bridge-nf-call-iptables=1

     net.bridge.bridge-nf-call-ip6tables=1

     net.ipv4.ip_forward=1

     net.ipv4.tcp_tw_recycle=0

     # Forbid using swap space; it may only be used when the system is about to OOM
     vm.swappiness=0

     # Do not check whether enough physical memory is available
     vm.overcommit_memory=1

     # Do not panic on OOM; let the OOM killer handle it
     vm.panic_on_oom=0

     fs.inotify.max_user_instances=8192

     fs.inotify.max_user_watches=1048576

     fs.file-max=52706963

     fs.nr_open=52706963

     net.ipv6.conf.all.disable_ipv6=1

     net.netfilter.nf_conntrack_max=2310720

    EOF

     

    sysctl -p /etc/sysctl.d/kubernetes.conf
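    If sysctl reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is not loaded yet. A minimal fix, assuming the standard CentOS 7/EulerOS module tooling (this step is not in the original post):

    modprobe br_netfilter
    # load the module automatically on boot as well
    echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
    # re-apply the kernel parameters
    sysctl -p /etc/sysctl.d/kubernetes.conf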

    9. Set hostnames and edit the hosts file

    On each server, set its hostname and update /etc/hosts (an alternative using hostnamectl is sketched after the host list below).

    # Set the hostname

    vi /etc/hostname    # each server sets its own hostname
    # Edit the hosts file

    vi /etc/hosts    # map node hostnames to IP addresses

    192.168.1.237 k8s-master

    192.168.1.238 k8s-node1

    192.168.1.244 k8s-node2
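    Alternatively, hostnamectl can set the hostname in one step; run the matching command on each node (a convenience sketch, not in the original post):

    hostnamectl set-hostname k8s-master    # on 192.168.1.237
    hostnamectl set-hostname k8s-node1     # on 192.168.1.238
    hostnamectl set-hostname k8s-node2     # on 192.168.1.244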

     

    10. Passwordless SSH from the master to the other nodes

    Here k8s-master is the master and k8s-node1 and k8s-node2 are the worker nodes; perform this step on the master only.

    Generate the key pair

    [root@K8S00 ~]# ssh-keygen -t rsa  # keep pressing Enter to accept the defaults

     

    Generating public/private rsa key pair.

    Enter file in which to save the key (/root/.ssh/id_rsa):

    Created directory '/root/.ssh'.

    Enter passphrase (empty for no passphrase):

    Enter same passphrase again:

    Your identification has been saved in /root/.ssh/id_rsa.

    Your public key has been saved in /root/.ssh/id_rsa.pub.

    The key fingerprint is:

    SHA256:c9mUKafubFrLnpZGEnQflcSYu7KPc64Mz/75WPwLvJY root@K8S00

    The key's randomart image is:

    +---[RSA 2048]----+

    |             *o. |

    |        . . +oo  |

    |       . ...=o   |

    |        .  Bo    |

    |        S.+ ..   |

    |        .+o o.   |

    |        .oo+ o+  |

    |         XB+.Eo. |

    |        .B#BBo..o|

    +----[SHA256]-----+

    Distribute the public key

    Allow the master's root account to log in to all nodes without a password

    ssh-copy-id -i .ssh/id_rsa.pub root@k8s-node1

    ssh-copy-id -i .ssh/id_rsa.pub root@k8s-node2

    # You will be prompted for a password; enter the root password of the corresponding k8s-node machine

    11. Command completion

    Install bash-completion

     yum -y install bash-completion

    Load bash-completion

    source /etc/profile.d/bash_completion.sh
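    Once kubectl is installed in the next part, its own command completion can be enabled as well (an optional addition, assuming the default bash shell):

    source <(kubectl completion bash)
    # persist it for new shells
    echo "source <(kubectl completion bash)" >> ~/.bashrc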

     

    II. Install Kubernetes 1.17.3

    1. Configure the yum repositories (run on every node)

    1) Back up the original yum repo file

    mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

    2) Download the Aliyun yum repo file

    CentOS 7:

    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

     

    EulerOS SP5:

    cat > /etc/yum.repos.d/EulerOS.repo << EOF

    [base]

    name=EulerOS-2.0SP5 base

    baseurl=http://repo.huaweicloud.com/euler/2.5/os/x86_64/

    enabled=1

    gpgcheck=1

    gpgkey=http://repo.huaweicloud.com/euler/2.5/os/RPM-GPG-KEY-EulerOS
    EOF

    3) Rebuild the yum cache

    yum makecache fast

    4) Configure the yum repo needed for Kubernetes

    cat > /etc/yum.repos.d/kubernetes.repo << EOF

    [kubernetes]

    name=Kubernetes

    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

    enabled=1

    gpgcheck=0
    EOF

     

    5) Clean the yum cache

    yum clean all

    6) Rebuild the yum cache

    yum makecache fast

    7) Update installed packages

    yum -y update

    8) Install prerequisite packages

    yum -y install yum-utils device-mapper-persistent-data lvm2

    9) Add the Docker CE yum repo

    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

    On EulerOS, Docker can be installed directly without adding this repo.

    2. Install Docker 19.03 (run on every node)

    2.1 List the available Docker versions

    yum list docker-ce --showduplicates |sort -r

    2.2 Install the 19.03 version and enable the service

    yum install -y docker-ce-19*
    systemctl enable docker && systemctl start docker

    # Check Docker's status; if it shows active (running), Docker is working normally
    systemctl status docker

    2.3 Modify the Docker configuration file (cgroup driver, logging, and storage driver)

    [root@k8s-master ~]# cat > /etc/docker/daemon.json <<EOF

    {

    "exec-opts": ["native.cgroupdriver=systemd"],

    "log-driver": "json-file",

    "log-opts": {

    "max-size": "100m"

    },

    "storage-driver": "overlay2",

    "storage-opts": [

    "overlay2.override_kernel_check=true"

    ]

    }

    EOF

    2.4 Restart Docker so the configuration takes effect

    systemctl restart docker
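    A quick check (not in the original) that the new settings took effect:

    docker info | grep -i "cgroup driver"    # should report: Cgroup Driver: systemd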

     

    3. Install Kubernetes 1.17.3

    3.1 Install kubeadm and kubelet on k8s-master and the k8s-node machines

    yum install kubeadm-1.17.3 kubelet-1.17.3

    systemctl enable kubelet
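    Optionally verify the installed versions (a quick check, not in the original):

    kubeadm version -o short    # expect v1.17.3
    kubelet --version           # expect Kubernetes v1.17.3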

    3.2 Upload the image archives to the k8s-master and k8s-node machines and load them manually. The images are on Baidu Netdisk; they were pulled from the official registries, so they are safe to use. A quick check of the loaded images is sketched after the version notes below.

    链接:https://pan.baidu.com/s/1eiU69FKkgz0wBl5Lf1ClHw
    提取码:vrgv

    docker load -i  kube-apiserver.tar.gz

    docker load -i   kube-scheduler.tar.gz

    docker load -i   kube-controller-manager.tar.gz

    docker load -i  pause.tar.gz

    docker load -i  cordns.tar.gz

    docker load -i  etcd.tar.gz

    docker load -i  kube-proxy.tar.gz

    docker load -i cni.tar.gz

    docker load -i calico-node.tar.gz
    docker load -i  kubernetes-dashboard_1_10.tar.gz

    docker load -i  metrics-server-amd64_0_3_1.tar.gz

    docker load -i  addon.tar.gz

    Notes:

    pause version: 3.1; etcd version: 3.4.3

    coredns version: 1.6.5

    cni version: 3.5.3

    calico version: 3.5.3

    apiserver, scheduler, controller-manager, kube-proxy version: 1.17.3

    kubernetes dashboard version: 1.10.1

    metrics-server version: 0.3.1

    addon-resizer version: 1.8.4
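    After loading, the images can be listed to confirm they are present (an optional check; the exact repository names may differ slightly depending on the archives):

    docker images | grep -E 'kube-|etcd|coredns|pause|calico'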

     

    3.3 Initialize the Kubernetes cluster on the k8s-master node
    kubeadm init --kubernetes-version=v1.17.3 --pod-network-cidr=10.244.0.0/16  --apiserver-advertise-address=192.168.1.237

     

    Output like the following indicates that initialization succeeded:

    To start using your cluster, you need to run the following as a regular user:

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/

    Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join 192.168.124.16:6443 --token i9m8e5.z12dnlekjbmuebsk    --discovery-token-ca-cert-hash sha256:2dc931b4508137fbe1bcb93dc84b7332e7e874ec5862a9e8b8fff9f7c2b57621

     

    Note:

    Remember the kubeadm join ... command above: it is what you will run on the node machines to join them to the cluster. The token and hash differ for every kubeadm init run, so record the output of your own run; it is used below.
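    If the join command is lost or the token expires (tokens are valid for 24 hours by default), a new one can be generated on the master with a standard kubeadm command (not part of the original post):

    kubeadm token create --print-join-command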

     

    3.4 Run the following on the k8s-master node so that you have permission to operate on cluster resources

    mkdir -p $HOME/.kube

    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

    chown $(id -u):$(id -g) $HOME/.kube/config

    kubectl get nodes 

    The output looks like this:

    NAME         STATUS     ROLES    AGE     VERSION
    k8s-master   NotReady   master   2m13s   v1.17.3

    3.5 Join the k8s-node machines to the cluster (run on each node)

    kubeadm join 192.168.1.237:6443 --token i9m8e5.z12dnlekjbmuebsk    --discovery-token-ca-cert-hash sha256:2dc931b4508137fbe1bcb93dc84b7332e7e874ec5862a9e8b8fff9f7c2b57621

    Note: this join command is the one generated by the initialization in step 3.3.

    3.6 Check the cluster node status on the k8s-master node
    kubectl get nodes
    All nodes still show a STATUS of NotReady because no network plugin (such as calico or flannel) has been installed yet.

    3.7 Install the calico network plugin on the k8s-master node
    Run the following on the master (the calico.yaml manifest used here is listed below):
    kubectl apply -f calico.yaml

     

    # Calico Version v3.5.3
    # https://docs.projectcalico.org/v3.5/releases#v3.5.3
    # This manifest includes the following component versions:
    #   calico/node:v3.5.3
    #   calico/cni:v3.5.3
    
    # This ConfigMap is used to configure a self-hosted Calico installation.
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: calico-config
      namespace: kube-system
    data:
      # Typha is disabled.
      typha_service_name: "none"
      # Configure the Calico backend to use.
      calico_backend: "bird"
    
      # Configure the MTU to use
      veth_mtu: "1440"
    
      # The CNI network configuration to install on each node.  The special
      # values in this config will be automatically populated.
      cni_network_config: |-
        {
          "name": "k8s-pod-network",
          "cniVersion": "0.3.0",
          "plugins": [
            {
              "type": "calico",
              "log_level": "info",
              "datastore_type": "kubernetes",
              "nodename": "__KUBERNETES_NODE_NAME__",
              "mtu": __CNI_MTU__,
              "ipam": {
                "type": "host-local",
                "subnet": "usePodCidr"
              },
              "policy": {
                  "type": "k8s"
              },
              "kubernetes": {
                  "kubeconfig": "__KUBECONFIG_FILEPATH__"
              }
            },
            {
              "type": "portmap",
              "snat": true,
              "capabilities": {"portMappings": true}
            }
          ]
        }
    
    ---
    
    # This manifest installs the calico/node container, as well
    # as the Calico CNI plugins and network config on
    # each master and worker node in a Kubernetes cluster.
    kind: DaemonSet
    apiVersion: apps/v1
    metadata:
      name: calico-node
      namespace: kube-system
      labels:
        k8s-app: calico-node
    spec:
      selector:
        matchLabels:
          k8s-app: calico-node
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
      template:
        metadata:
          labels:
            k8s-app: calico-node
          annotations:
            # This, along with the CriticalAddonsOnly toleration below,
            # marks the pod as a critical add-on, ensuring it gets
            # priority scheduling and that its resources are reserved
            # if it ever gets evicted.
            scheduler.alpha.kubernetes.io/critical-pod: ''
        spec:
          nodeSelector:
            beta.kubernetes.io/os: linux
          hostNetwork: true
          tolerations:
            # Make sure calico-node gets scheduled on all nodes.
            - effect: NoSchedule
              operator: Exists
            # Mark the pod as a critical add-on for rescheduling.
            - key: CriticalAddonsOnly
              operator: Exists
            - effect: NoExecute
              operator: Exists
          serviceAccountName: calico-node
          # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
          # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
          terminationGracePeriodSeconds: 0
          initContainers:
            # This container installs the Calico CNI binaries
            # and CNI network config file on each node.
            - name: install-cni
              image: quay.io/calico/cni:v3.5.3
              command: ["/install-cni.sh"]
              env:
                # Name of the CNI config file to create.
                - name: CNI_CONF_NAME
                  value: "10-calico.conflist"
                # The CNI network config to install on each node.
                - name: CNI_NETWORK_CONFIG
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: cni_network_config
                # Set the hostname based on the k8s node name.
                - name: KUBERNETES_NODE_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
                # CNI MTU Config variable
                - name: CNI_MTU
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: veth_mtu
                # Prevents the container from sleeping forever.
                - name: SLEEP
                  value: "false"
              volumeMounts:
                - mountPath: /host/opt/cni/bin
                  name: cni-bin-dir
                - mountPath: /host/etc/cni/net.d
                  name: cni-net-dir
          containers:
            # Runs calico/node container on each Kubernetes node.  This
            # container programs network policy and routes on each
            # host.
            - name: calico-node
              image: quay.io/calico/node:v3.5.3
              env:
                # Use Kubernetes API as the backing datastore.
                - name: DATASTORE_TYPE
                  value: "kubernetes"
                # Wait for the datastore.
                - name: WAIT_FOR_DATASTORE
                  value: "true"
                # Set based on the k8s node name.
                - name: NODENAME
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
                # Choose the backend to use.
                - name: CALICO_NETWORKING_BACKEND
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: calico_backend
                # Cluster type to identify the deployment type
                - name: CLUSTER_TYPE
                  value: "k8s,bgp"
                # Auto-detect the BGP IP address.
                - name: IP
                  value: "autodetect"
                - name: IP_AUTODETECTION_METHOD
                  value: "can-reach=192.168.124.56"
                # Enable IPIP
                - name: CALICO_IPV4POOL_IPIP
                  value: "Always"
                # Set MTU for tunnel device used if ipip is enabled
                - name: FELIX_IPINIPMTU
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: veth_mtu
                # The default IPv4 pool to create on startup if none exists. Pod IPs will be
                # chosen from this range. Changing this value after installation will have
                # no effect. This should fall within `--cluster-cidr`.
                - name: CALICO_IPV4POOL_CIDR
                  value: "10.244.0.0/16"
                # Disable file logging so `kubectl logs` works.
                - name: CALICO_DISABLE_FILE_LOGGING
                  value: "true"
                # Set Felix endpoint to host default action to ACCEPT.
                - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
                  value: "ACCEPT"
                # Disable IPv6 on Kubernetes.
                - name: FELIX_IPV6SUPPORT
                  value: "false"
                # Set Felix logging to "info"
                - name: FELIX_LOGSEVERITYSCREEN
                  value: "info"
                - name: FELIX_HEALTHENABLED
                  value: "true"
              securityContext:
                privileged: true
              resources:
                requests:
                  cpu: 250m
              livenessProbe:
                httpGet:
                  path: /liveness
                  port: 9099
                  host: localhost
                periodSeconds: 10
                initialDelaySeconds: 10
                failureThreshold: 6
              readinessProbe:
                exec:
                  command:
                  - /bin/calico-node
                  - -bird-ready
                  - -felix-ready
                periodSeconds: 10
              volumeMounts:
                - mountPath: /lib/modules
                  name: lib-modules
                  readOnly: true
                - mountPath: /run/xtables.lock
                  name: xtables-lock
                  readOnly: false
                - mountPath: /var/run/calico
                  name: var-run-calico
                  readOnly: false
                - mountPath: /var/lib/calico
                  name: var-lib-calico
                  readOnly: false
          volumes:
            # Used by calico/node.
            - name: lib-modules
              hostPath:
                path: /lib/modules
            - name: var-run-calico
              hostPath:
                path: /var/run/calico
            - name: var-lib-calico
              hostPath:
                path: /var/lib/calico
            - name: xtables-lock
              hostPath:
                path: /run/xtables.lock
                type: FileOrCreate
            # Used to install CNI.
            - name: cni-bin-dir
              hostPath:
                path: /opt/cni/bin
            - name: cni-net-dir
              hostPath:
                path: /etc/cni/net.d
    ---
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: calico-node
      namespace: kube-system
    
    ---
    # Create all the CustomResourceDefinitions needed for
    # Calico policy and networking mode.
    
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
       name: felixconfigurations.crd.projectcalico.org
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: FelixConfiguration
        plural: felixconfigurations
        singular: felixconfiguration
    ---
    
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: bgppeers.crd.projectcalico.org
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: BGPPeer
        plural: bgppeers
        singular: bgppeer
    
    ---
    
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: bgpconfigurations.crd.projectcalico.org
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: BGPConfiguration
        plural: bgpconfigurations
        singular: bgpconfiguration
    
    ---
    
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: ippools.crd.projectcalico.org
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: IPPool
        plural: ippools
        singular: ippool
    
    ---
    
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: hostendpoints.crd.projectcalico.org
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: HostEndpoint
        plural: hostendpoints
        singular: hostendpoint
    
    ---
    
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: clusterinformations.crd.projectcalico.org
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: ClusterInformation
        plural: clusterinformations
        singular: clusterinformation
    
    ---
    
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: globalnetworkpolicies.crd.projectcalico.org
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: GlobalNetworkPolicy
        plural: globalnetworkpolicies
        singular: globalnetworkpolicy
    
    ---
    
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: globalnetworksets.crd.projectcalico.org
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: GlobalNetworkSet
        plural: globalnetworksets
        singular: globalnetworkset
    
    ---
    
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: networkpolicies.crd.projectcalico.org
    spec:
      scope: Namespaced
      group: crd.projectcalico.org
      version: v1
      names:
        kind: NetworkPolicy
        plural: networkpolicies
        singular: networkpolicy
    ---
    
    # Include a clusterrole for the calico-node DaemonSet,
    # and bind it to the calico-node serviceaccount.
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: calico-node
    rules:
      # The CNI plugin needs to get pods, nodes, and namespaces.
      - apiGroups: [""]
        resources:
          - pods
          - nodes
          - namespaces
        verbs:
          - get
      - apiGroups: [""]
        resources:
          - endpoints
          - services
        verbs:
          # Used to discover service IPs for advertisement.
          - watch
          - list
          # Used to discover Typhas.
          - get
      - apiGroups: [""]
        resources:
          - nodes/status
        verbs:
          # Needed for clearing NodeNetworkUnavailable flag.
          - patch
          # Calico stores some configuration information in node annotations.
          - update
      # Watch for changes to Kubernetes NetworkPolicies.
      - apiGroups: ["networking.k8s.io"]
        resources:
          - networkpolicies
        verbs:
          - watch
          - list
      # Used by Calico for policy information.
      - apiGroups: [""]
        resources:
          - pods
          - namespaces
          - serviceaccounts
        verbs:
          - list
          - watch
      # The CNI plugin patches pods/status.
      - apiGroups: [""]
        resources:
          - pods/status
        verbs:
          - patch
      # Calico monitors various CRDs for config.
      - apiGroups: ["crd.projectcalico.org"]
        resources:
          - globalfelixconfigs
          - felixconfigurations
          - bgppeers
          - globalbgpconfigs
          - bgpconfigurations
          - ippools
          - globalnetworkpolicies
          - globalnetworksets
          - networkpolicies
          - clusterinformations
          - hostendpoints
        verbs:
          - get
          - list
          - watch
      # Calico must create and update some CRDs on startup.
      - apiGroups: ["crd.projectcalico.org"]
        resources:
          - ippools
          - felixconfigurations
          - clusterinformations
        verbs:
          - create
          - update
      # Calico stores some configuration information on the node.
      - apiGroups: [""]
        resources:
          - nodes
        verbs:
          - get
          - list
          - watch
      # These permissions are only required for upgrade from v2.6, and can
      # be removed after upgrade or on fresh installations.
      - apiGroups: ["crd.projectcalico.org"]
        resources:
          - bgpconfigurations
          - bgppeers
        verbs:
          - create
          - update
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: calico-node
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: calico-node
    subjects:
    - kind: ServiceAccount
      name: calico-node
      namespace: kube-system
    ---

     

    On the k8s-master node, check whether calico is in the Running state

    kubectl  get pods -n kube-system

    Output like the following means calico deployed correctly. If calico fails to deploy, the coredns pods stay stuck in ContainerCreating:

    calico-node-rkklw            1/1     Running   0          3m4s
    calico-node-rnzfq            1/1     Running   0          3m4s
    coredns-6955765f44-jzm4k     1/1     Running   0          25m
    coredns-6955765f44-mmbr7     1/1     Running   0          25m
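    If a pod stays in ContainerCreating or Pending, the usual first step is to inspect its events and keep watching until everything is Running (an optional troubleshooting hint, not in the original; replace <pod-name> with the stuck pod's name):

    kubectl describe pod <pod-name> -n kube-system
    kubectl get pods -n kube-system -w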

    On the k8s-master node, check the node STATUS again
    kubectl get nodes

    Output like the following, with every STATUS showing Ready, means the cluster is healthy:

    NAME         STATUS   ROLES    AGE     VERSION

    k8s-master   Ready    master   9m52s   v1.17.3

    k8s-node1    Ready    <none>   6m10s   v1.17.3

    k8s-node2    Ready    <none>   6m10s   v1.17.3
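    As a final optional smoke test (an addition, not from the original post), deploy a small workload and confirm it is scheduled and receives a pod IP from the 10.244.0.0/16 range:

    kubectl create deployment nginx --image=nginx
    kubectl get pods -o wide             # the pod should reach Running with a 10.244.x.x address
    kubectl delete deployment nginx      # clean up afterwards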

     

     

     

     

     

    Hi~ if this post helped, please give it a like, thanks. Respect to everyone who keeps working hard.
  • Original article: https://www.cnblogs.com/dongweizhen/p/13824784.html