    K8S Series - 1. Deploying a K8S Cluster Offline

    The new KubeSphere release uses the kubekey tool (kk) to deploy a K8S cluster in a single run.

    Host plan

    | Internal IP    | VM instance   | Roles                                 | Hostname | Spec                                 | OS         |
    | 192.168.56.108 | node01-master | etcd, master, worker, docker registry | node1    | CPU: 2 cores, mem: 2 GB, HDD: 100 GB | CentOS 7.6 |
    | 192.168.56.109 | node01-worker | worker                                | node2    | CPU: 2 cores, mem: 2 GB, HDD: 8 GB   | CentOS 7.6 |
    | 192.168.56.110 | K8S-node02    | worker                                | node3    | CPU: 2 cores, mem: 2 GB, HDD: 8 GB   | CentOS 7.6 |

    Note: a VirtualBox Host-Only network on 192.168.56.0 is used as the subnet over which the three VM instances reach one another.

    Pre-deployment preparation

    1. Switch the package repositories

    Switch the CentOS YUM repositories to the Aliyun mirrors:

    # Install wget
    yum install wget -y
    # Back up the existing repo file
    mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
    # Fetch the Aliyun base repo
    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    # Fetch the Aliyun EPEL repo
    wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
    # Clean the cache and rebuild it
    yum clean all && yum makecache
    

    2. Time synchronization

    Enable NTP and confirm that time synchronization succeeds on every node:

      timedatectl
      timedatectl set-ntp true
    
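    To double-check that NTP is actually in sync everywhere, a minimal sketch (it assumes root SSH access to each host; adjust the IP list to your host plan):

      # Hypothetical check loop; run it from any machine that can reach the nodes.
      for host in 192.168.56.108 192.168.56.109 192.168.56.110; do
        echo "== $host =="
        ssh root@"$host" "timedatectl | grep -Ei 'ntp|synchronized'"
      done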

    3. Install Docker


    The KubeSphere offline installation package installs K8S v1.17.9 by default, which corresponds to Docker 19.03.

    To avoid unnecessary version problems, first install Docker on all three machines. Here it is installed online from the Aliyun repository, pinned to docker-ce-19.03.4.

    # Install Docker CE
    # Set up the repository: install the required packages
    yum install -y yum-utils \
        device-mapper-persistent-data \
        lvm2

    # Add the official Docker repository (switch to the Aliyun mirror below if it is slow)
    yum-config-manager \
        --add-repo \
        https://download.docker.com/linux/centos/docker-ce.repo

    # Aliyun mirror of the Docker repository
    yum-config-manager \
        --add-repo \
        http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

    # Install Docker CE
    yum install -y containerd.io-1.2.10 \
        docker-ce-19.03.4 \
        docker-ce-cli-19.03.4

    # Start Docker and enable it at boot
    systemctl start docker
    systemctl enable docker
    
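    Optional: the kubeadm output later in this walkthrough warns that Docker is using the cgroupfs cgroup driver while systemd is recommended. The installation works with the default, but to silence the warning this is a sketch of the usual daemon.json change (it overwrites /etc/docker/daemon.json, so merge by hand if that file already exists):

    # Switch Docker's cgroup driver to systemd (optional).
    mkdir -p /etc/docker
    cat > /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    systemctl restart docker
    docker info | grep -i 'cgroup driver'   # should now report: systemd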

    At this point the three VM instances need the NAT network option enabled on network adapter 1 in VirtualBox (so they have Internet access for the online installation above); inside the VMs this corresponds to the enp0s3 interface.

    Start deploying K8S

    1. Create the cluster configuration file

    [root@localhost ~]# tar xzf kubesphere-all-v3.0.0-offline-linux-amd64.tar.gz 
    [root@localhost ~]# cd kubesphere-all-v3.0.0-offline-linux-amd64
    
    [root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk create config
    [root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ll
    total 55156
    drwxr-xr-x. 5 root root       76 Sep 21 05:36 charts
    -rw-r--r--. 1 root root      759 Sep 26 09:10 config-sample.yaml
    drwxr-xr-x. 2 root root      116 Sep 21 06:01 dependencies
    -rwxr-xr-x. 1 root root 56469720 Sep 21 01:54 kk
    drwxr-xr-x. 6 root root       68 Sep  3 01:45 kubekey
    drwxr-xr-x. 2 root root     4096 Sep 21 06:54 kubesphere-images-v3.0.0
    
    

    2. Edit the configuration file

    [root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# cat  config-sample.yaml 
    apiVersion: kubekey.kubesphere.io/v1alpha1
    kind: Cluster
    metadata:
      name: sample
    spec:
      hosts:
      - {name: node1, address: 192.168.56.108, internalAddress: 192.168.56.108, user: root, password: kkroot}
      - {name: node2, address: 192.168.56.109, internalAddress: 192.168.56.109, user: root, password: kkroot}
      - {name: node3, address: 192.168.56.110, internalAddress: 192.168.56.110, user: root, password: kkroot}
      roleGroups:
        etcd:
        - node1
        master: 
        - node1
        worker:
        - node1
        - node2
        - node3
      controlPlaneEndpoint:
        domain: lb.kubesphere.local
        address: ""
        port: "6443"
      kubernetes:
        version: v1.17.9
        imageRepo: kubesphere
        clusterName: cluster.local
      network:
        plugin: calico
        kubePodsCIDR: 10.233.64.0/18
        kubeServiceCIDR: 10.233.0.0/18
      registry:
        registryMirrors: []
        insecureRegistries: []
        privateRegistry: dockerhub.kubekey.local
      addons: []
    

    Update the name, address and internalAddress entries for node1, node2 and node3 to match your hosts, and add privateRegistry: dockerhub.kubekey.local under the registry section.
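    Since kk connects to every host over SSH with the user and password given in config-sample.yaml, it is worth confirming beforehand that each listed address is reachable. A quick manual sketch (you will be prompted for the password unless keys are set up):

    # Confirm SSH reachability of every node listed in config-sample.yaml.
    for host in 192.168.56.108 192.168.56.109 192.168.56.110; do
      ssh -o ConnectTimeout=5 root@"$host" hostname || echo "cannot reach $host"
    done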

    3. Check dependencies

    [root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk init os -f config-sample.yaml -s ./dependencies/
    INFO[07:23:15 EDT] Init operating system 
    INFO[07:19:58 EDT] Start initializing node2 [192.168.56.109]     node=192.168.56.109
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms.tar.gz to 192.168.56.109:/tmp   Done
    INFO[07:21:12 EDT] Complete initialization node2 [192.168.56.109]  node=192.168.56.109
    
    INFO[07:23:20 EDT] Start initializing node3 [192.168.56.110]     node=192.168.56.110
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms.tar.gz to 192.168.56.110:/tmp   Done
    INFO[07:24:27 EDT] Complete initialization node3 [192.168.56.110]  node=192.168.56.110
    INFO[07:24:27 EDT] Init operating system successful.  
    
    

    4. Create the image registry

    Use kk to create a self-signed image registry by running:

    [root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk init os -f config-sample.yaml -s ./dependencies/ --add-images-repo
    
    INFO[07:26:32 EDT] Init operating system                        
    
    Local images repository created successfully. Address: dockerhub.kubekey.local
    
    INFO[07:27:03 EDT] Init operating system successful.            
    
    

    If the command produces no result, load the registry image into Docker manually, make sure the Docker service is running, and run it again:

    [root@localhost kubesphere-images-v3.0.0]# docker load < registry.tar 
    3e207b409db3: Loading layer  5.879MB/5.879MB
    f5b9430e0e42: Loading layer  817.2kB/817.2kB
    239a096513b5: Loading layer  20.08MB/20.08MB
    a5f27630cdd9: Loading layer  3.584kB/3.584kB
    b3f465d7c4d1: Loading layer  2.048kB/2.048kB
    Loaded image: registry:2
    
    [root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# systemctl start docker
    [root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk init os -f config-sample.yaml -s ./dependencies/ --add-images-repo
    INFO[10:45:37 EDT] Init operating system                        
    
    Local images repository created successfully. Address: dockerhub.kubekey.local
    
    INFO[10:45:39 EDT] Init operating system successful.          
    
    

    Note: it is recommended to mount dedicated storage for the Docker image registry; the default location is /mnt/registry. For details see the "add and mount a virtual disk" section of "Expanding a VirtualBox CentOS virtual disk".
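    A rough sketch of dedicating a disk to the registry, assuming the newly added blank disk shows up as /dev/sdb (the device name and filesystem are assumptions; the VirtualBox side of attaching the disk is covered in the article linked above):

    # WARNING: this formats /dev/sdb; make sure it is the new, empty disk.
    mkfs.xfs /dev/sdb
    mkdir -p /mnt/registry
    mount /dev/sdb /mnt/registry
    echo '/dev/sdb /mnt/registry xfs defaults 0 0' >> /etc/fstab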

    5. Load and push the images

    Use push-images.sh to import the images into the registry prepared above:

    ./push-images.sh  dockerhub.kubekey.local
    

    The script loads the required images and re-pushes them to the private registry dockerhub.kubekey.local. The registry catalog can then be queried to confirm the push:

    [root@localhost ~]# curl -XGET https://dockerhub.kubekey.local/v2/_catalog --cacert /etc/docker/certs.d/dockerhub.kubekey.local/ca.crt 
    {"repositories":["calico/cni","calico/kube-controllers","calico/node","calico/pod2daemon-flexvol","coredns/coredns","csiplugin/csi-attacher","csiplugin/csi-neonsan","csiplugin/csi-neonsan-centos","csiplugin/csi-neonsan-ubuntu","csiplugin/csi-node-driver-registrar","csiplugin/csi-provisioner","csiplugin/csi-qingcloud","csiplugin/csi-resizer","csiplugin/csi-snapshotter","csiplugin/snapshot-controller","fluent/fluentd","istio/citadel","istio/galley","istio/kubectl","istio/mixer","istio/pilot","istio/proxyv2","istio/sidecar_injector","jaegertracing/jaeger-agent","jaegertracing/jaeger-collector","jaegertracing/jaeger-es-index-cleaner","jaegertracing/jaeger-operator","jaegertracing/jaeger-query","jenkins/jenkins","jenkins/jnlp-slave","jimmidyson/configmap-reload","joosthofman/wget","kubesphere/alert-adapter","kubesphere/alerting","kubesphere/alerting-dbinit","kubesphere/builder-base","kubesphere/builder-go","kubesphere/builder-maven","kubesphere/builder-nodejs","kubesphere/elasticsearch-oss","kubesphere/etcd","kubesphere/examples-bookinfo-details-v1","kubesphere/examples-bookinfo-productpage-v1","kubesphere/examples-bookinfo-ratings-v1","kubesphere/examples-bookinfo-reviews-v1","kubesphere/examples-bookinfo-reviews-v2","kubesphere/examples-bookinfo-reviews-v3","kubesphere/fluent-bit","kubesphere/fluentbit-operator","kubesphere/java-11-centos7","kubesphere/java-11-runtime","kubesphere/java-8-centos7","kubesphere/java-8-runtime","kubesphere/jenkins-uc","kubesphere/k8s-dns-node-cache","kubesphere/ks-apiserver","kubesphere/ks-console","kubesphere/ks-controller-manager","kubesphere/ks-devops","kubesphere/ks-installer","kubesphere/ks-upgrade","kubesphere/kube-apiserver","kubesphere/kube-auditing-operator","kubesphere/kube-auditing-webhook","kubesphere/kube-controller-manager","kubesphere/kube-events-exporter","kubesphere/kube-events-operator","kubesphere/kube-events-ruler","kubesphere/kube-proxy","kubesphere/kube-rbac-proxy","kubesphere/kube-scheduler","kubesphere/kube-state-metrics","kubesphere/kubectl","kubesphere/linux-utils","kubesphere/log-sidecar-injector","kubesphere/metrics-server","kubesphere/netshoot","kubesphere/nfs-client-provisioner","kubesphere/nginx-ingress-controller","kubesphere/node-disk-manager","kubesphere/node-disk-operator","kubesphere/node-exporter","kubesphere/nodejs-4-centos7","kubesphere/nodejs-6-centos7","kubesphere/nodejs-8-centos7","kubesphere/notification","kubesphere/notification-manager","kubesphere/notification-manager-operator","kubesphere/pause","kubesphere/prometheus-config-reloader","kubesphere/prometheus-operator","kubesphere/provisioner-localpv","kubesphere/python-27-centos7","kubesphere/python-34-centos7","kubesphere/python-35-centos7","kubesphere/python-36-centos7","kubesphere/s2i-binary","kubesphere/s2ioperator","kubesphere/s2irun","kubesphere/tomcat85-java11-centos7"]}
    
    
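    The same self-signed CA can be used to query a single repository. For example, to list the tags pushed for one of the images above (standard Docker Registry HTTP API v2; kubesphere/kube-apiserver is just an example taken from the catalog):

    curl --cacert /etc/docker/certs.d/dockerhub.kubekey.local/ca.crt \
         https://dockerhub.kubekey.local/v2/kubesphere/kube-apiserver/tags/list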

    With the preparation above finished and the configuration file double-checked, run the deployment.

    Run the deployment

    Note: make sure the VMs' NAT network is disabled again at this point, i.e. the enp0s3 interface no longer exists inside the VM instances. This matters: otherwise, when the Calico network plugin is installed later, it may well bind to the wrong IP address. A quick check is shown below.
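    On each node, confirm that only the Host-Only 192.168.56.x address (plus loopback) remains before continuing:

    # There should be no enp0s3 / NAT address (10.0.2.x) left at this point.
    ip -4 addr show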

    1. Check the nodes' dependency status

    [root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk create cluster -f config-sample.yaml
    +-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
    | name  | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
    +-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
    | node3 | y    | y    | y       | y        |       | y     |           | y      |            |             |                  | EDT 10:21:39 |
    | node1 | y    | y    | y       | y        |       | y     |           | y      |            |             |                  | EDT 10:21:40 |
    | node2 | y    | y    | y       | y        |       | y     |           | y      |            |             |                  | EDT 10:21:39 |
    +-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
    

    2. Install the missing dependencies

    cd /root/kubesphere-all-v3.0.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms
    [root@localhost centos-7-amd64-rpms]# yum localinstall -y socat-1.7.3.2-2.el7.x86_64.rpm
    [root@localhost centos-7-amd64-rpms]# yum localinstall -y conntrack-tools-1.4.4-7.el7.x86_64.rpm
    [root@localhost centos-7-amd64-rpms]# yum localinstall -y nfs-utils-1.3.0-0.66.el7_8.x86_64.rpm
    [root@localhost centos-7-amd64-rpms]# yum localinstall -y ceph-common-10.2.5-4.el7.x86_64.rpm
    [root@localhost centos-7-amd64-rpms]# yum localinstall -y glusterfs-client-xlators-6.0-29.el7.x86_64.rpm
    [root@localhost centos-7-amd64-rpms]# yum localinstall -y glusterfs-6.0-29.el7.x86_64.rpm
    [root@localhost centos-7-amd64-rpms]# yum localinstall -y glusterfs-fuse-6.0-29.el7.x86_64.rpm
    
    
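    The same packages can be installed in one pass on every node that shows gaps in the pre-check table; this is simply the commands above collapsed into a single yum call:

    cd /root/kubesphere-all-v3.0.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms
    yum localinstall -y \
        socat-1.7.3.2-2.el7.x86_64.rpm \
        conntrack-tools-1.4.4-7.el7.x86_64.rpm \
        nfs-utils-1.3.0-0.66.el7_8.x86_64.rpm \
        ceph-common-10.2.5-4.el7.x86_64.rpm \
        glusterfs-client-xlators-6.0-29.el7.x86_64.rpm \
        glusterfs-6.0-29.el7.x86_64.rpm \
        glusterfs-fuse-6.0-29.el7.x86_64.rpm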

    3. Run the deployment again

    
    [root@node1 kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk create cluster -f config-sample.yaml
    +-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
    | name  | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
    +-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
    | node3 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | EDT 10:53:47 |
    | node1 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | EDT 10:53:47 |
    | node2 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | EDT 10:53:47 |
    +-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
    
    This is a simple check of your environment.
    Before installation, you should ensure that your machines meet all requirements specified at
    https://github.com/kubesphere/kubekey#requirements-and-recommendations
    
    Continue this installation? [yes/no]: yes
    INFO[10:53:49 EDT] Downloading Installation Files               
    INFO[10:53:49 EDT] Downloading kubeadm ...                      
    INFO[10:53:49 EDT] Downloading kubelet ...                      
    INFO[10:53:50 EDT] Downloading kubectl ...                      
    INFO[10:53:50 EDT] Downloading kubecni ...                      
    INFO[10:53:50 EDT] Downloading helm ...                         
    INFO[10:53:51 EDT] Configurating operating system ...           
    [node2 192.168.56.109] MSG:
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-arptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_local_reserved_ports = 30000-32767
    [node1 192.168.56.108] MSG:
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-arptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_local_reserved_ports = 30000-32767
    [node3 192.168.56.110] MSG:
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-arptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_local_reserved_ports = 30000-32767
    INFO[10:53:54 EDT] Installing docker ...                        
    INFO[10:53:55 EDT] Start to download images on all nodes        
    [node1] Downloading image: dockerhub.kubekey.local/kubesphere/etcd:v3.3.12
    [node3] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
    [node2] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
    [node1] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
    [node3] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
    [node2] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
    [node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9
    [node3] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
    [node2] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
    [node3] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
    [node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.17.9
    [node2] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
    [node2] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
    [node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.17.9
    [node3] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
    [node2] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
    [node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.17.9
    [node3] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
    [node2] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
    [node3] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
    [node1] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
    [node1] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
    [node1] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
    [node1] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
    [node1] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
    [node1] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
    INFO[10:53:59 EDT] Generating etcd certs                        
    INFO[10:54:01 EDT] Synchronizing etcd certs                     
    INFO[10:54:01 EDT] Creating etcd service                        
    INFO[10:54:05 EDT] Starting etcd cluster                        
    [node1 192.168.56.108] MSG:
    Configuration file already exists
    Waiting for etcd to start
    INFO[10:54:13 EDT] Refreshing etcd configuration                
    INFO[10:54:13 EDT] Backup etcd data regularly                   
    INFO[10:54:14 EDT] Get cluster status                           
    [node1 192.168.56.108] MSG:
    Cluster will be created.
    INFO[10:54:14 EDT] Installing kube binaries                     
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.108:/tmp/kubekey/kubeadm   Done
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.110:/tmp/kubekey/kubeadm   Done
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.109:/tmp/kubekey/kubeadm   Done
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.108:/tmp/kubekey/kubelet   Done
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.108:/tmp/kubekey/kubectl   Done
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.108:/tmp/kubekey/helm   Done
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.110:/tmp/kubekey/kubelet   Done
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.108:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.109:/tmp/kubekey/kubelet   Done
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.110:/tmp/kubekey/kubectl   Done
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.109:/tmp/kubekey/kubectl   Done
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.110:/tmp/kubekey/helm   Done
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.109:/tmp/kubekey/helm   Done
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.109:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
    Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.110:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
    INFO[10:54:32 EDT] Initializing kubernetes cluster              
    [node1 192.168.56.108] MSG:
    W1002 10:54:33.546978    7304 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
    W1002 10:54:33.547575    7304 validation.go:28] Cannot validate kube-proxy config - no validator is available
    W1002 10:54:33.547601    7304 validation.go:28] Cannot validate kubelet config - no validator is available
    [init] Using Kubernetes version: v1.17.9
    [preflight] Running pre-flight checks
            [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost lb.kubesphere.local node1 node1.cluster.local node2 node2.cluster.local node3 node3.cluster.local] and IPs [10.233.0.1 10.0.2.15 127.0.0.1 192.168.56.108 192.168.56.109 192.168.56.110 10.233.0.1]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] External etcd mode: Skipping etcd/ca certificate authority generation
    [certs] External etcd mode: Skipping etcd/server certificate generation
    [certs] External etcd mode: Skipping etcd/peer certificate generation
    [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
    [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
    W1002 10:54:39.078002    7304 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
    W1002 10:54:39.089428    7304 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
    W1002 10:54:39.091411    7304 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 26.007113 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: rajfez.t9320hox3sddbowz
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:
    
      kubeadm join lb.kubesphere.local:6443 --token rajfez.t9320hox3sddbowz \
        --discovery-token-ca-cert-hash sha256:99f5f95e912acb458719c9cbaa6d4acb5d36ca0e38dccb00c56d69c2f0ef7fa2 \
        --control-plane 
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join lb.kubesphere.local:6443 --token rajfez.t9320hox3sddbowz \
        --discovery-token-ca-cert-hash sha256:99f5f95e912acb458719c9cbaa6d4acb5d36ca0e38dccb00c56d69c2f0ef7fa2
    [node1 192.168.56.108] MSG:
    node/node1 untainted
    [node1 192.168.56.108] MSG:
    node/node1 labeled
    [node1 192.168.56.108] MSG:
    service "kube-dns" deleted
    [node1 192.168.56.108] MSG:
    service/coredns created
    [node1 192.168.56.108] MSG:
    serviceaccount/nodelocaldns created
    daemonset.apps/nodelocaldns created
    [node1 192.168.56.108] MSG:
    configmap/nodelocaldns created
    [node1 192.168.56.108] MSG:
    I1002 10:55:34.720063    9901 version.go:251] remote version is much newer: v1.19.2; falling back to: stable-1.17
    W1002 10:55:36.884062    9901 validation.go:28] Cannot validate kube-proxy config - no validator is available
    W1002 10:55:36.884090    9901 validation.go:28] Cannot validate kubelet config - no validator is available
    [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
    [upload-certs] Using certificate key:
    a9a0daeedbefb4b9a014f4b258b9916403f7136bea20d28ec03aa926c41fcb3e
    [node1 192.168.56.108] MSG:
    secret/kubeadm-certs patched
    [node1 192.168.56.108] MSG:
    secret/kubeadm-certs patched
    [node1 192.168.56.108] MSG:
    secret/kubeadm-certs patched
    [node1 192.168.56.108] MSG:
    W1002 10:55:37.738867   10303 validation.go:28] Cannot validate kube-proxy config - no validator is available
    W1002 10:55:37.738964   10303 validation.go:28] Cannot validate kubelet config - no validator is available
    kubeadm join lb.kubesphere.local:6443 --token 025byf.2t2mvldlr9wm1ycx     --discovery-token-ca-cert-hash sha256:99f5f95e912acb458719c9cbaa6d4acb5d36ca0e38dccb00c56d69c2f0ef7fa2
    [node1 192.168.56.108] MSG:
    NAME    STATUS     ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
    node1   NotReady   master,worker   34s   v1.17.9   192.168.56.108   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.4
    INFO[10:55:38 EDT] Deploying network plugin ...                 
    [node1 192.168.56.108] MSG:
    configmap/calico-config created
    customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
    clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrole.rbac.authorization.k8s.io/calico-node created
    clusterrolebinding.rbac.authorization.k8s.io/calico-node created
    daemonset.apps/calico-node created
    serviceaccount/calico-node created
    deployment.apps/calico-kube-controllers created
    serviceaccount/calico-kube-controllers created
    INFO[10:55:40 EDT] Joining nodes to cluster                     
    [node3 192.168.56.110] MSG:
    W1002 10:55:41.544472   12557 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    W1002 10:55:43.067290   12557 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    [node2 192.168.56.109] MSG:
    W1002 10:55:41.963749    8533 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    W1002 10:55:43.520053    8533 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    [node3 192.168.56.110] MSG:
    node/node3 labeled
    [node2 192.168.56.109] MSG:
    node/node2 labeled
    INFO[10:55:54 EDT] Congradulations! Installation is successful. 
    
    

    The deployment is now complete.
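    A minimal post-install check from node1 (assuming the kubeconfig was copied to ~/.kube/config as shown in the kubeadm output above):

    # All three nodes should report Ready once Calico is up.
    kubectl get nodes -o wide
    # Core components run in kube-system; nothing should stay in CrashLoopBackOff.
    kubectl get pods --all-namespaces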

    Troubleshooting

    1. Worker nodes hang while downloading images during cluster creation

    While the create-cluster step runs, the worker nodes cannot pull any images.

    After confirming on the master node that the registry container is running normally, test from a worker node whether the local registry is reachable:

    [root@localhost ~]# curl -XGET https://dockerhub.kubekey.local/v2/_catalog
    curl: (7) Failed connect to dockerhub.kubekey.local:443; Connection refused
    

    Since the image registry lives on the master node, disable the master's firewall and double-check the hosts entry on the worker nodes:

    # Add to /etc/hosts on each worker node
    192.168.56.108 dockerhub.kubekey.local

    After the change, test connectivity again:

    [root@localhost ~]# curl -XGET https://dockerhub.kubekey.local/v2/_catalog --cacert /etc/docker/certs.d/dockerhub.kubekey.local/ca.crt 
    {"repositories":[]}
    

    The repository list comes back empty here; check whether the registry's volume is mounted correctly and whether the host directory behind it has enough free space. If it does not, expand the filesystem as described in "Expanding a VirtualBox CentOS virtual disk". A quick sketch of that check follows.
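    On the master (registry) node; the name filter below is an assumption, and /mnt/registry is the default path mentioned earlier:

    # Is the registry container running, and is its backing directory healthy?
    docker ps --filter "name=registry"
    df -h /mnt/registry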

    2. Wrong CPU count

    ......
    [init] Using Kubernetes version: v1.17.9
    [preflight] Running pre-flight checks
            [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    error execution phase preflight: [preflight] Some fatal errors occurred:
            [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1  node=192.168.56.108
    WARN[10:40:18 EDT] Task failed ...                              
    WARN[10:40:18 EDT] error: interrupted by error                  
    Error: Failed to init kubernetes cluster: interrupted by error
    Usage:
      kk create cluster [flags]
    
    Flags:
      -f, --filename string          Path to a configuration file
      -h, --help                     help for cluster
          --skip-pull-images         Skip pre pull images
          --with-kubernetes string   Specify a supported version of kubernetes
          --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)
      -y, --yes                      Skip pre-check of the installation
    
    Global Flags:
          --debug   Print detailed information (default true)
    
    Failed to init kubernetes cluster: interrupted by error
    
    

    Running sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml" on the master showed that the VM's CPU count did not meet the requirement; increase the number of vCPUs and create the cluster again.
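    The requirement can be checked up front on every node with a trivial command:

    nproc   # kubeadm requires at least 2 CPUs on the master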

    3. TLS-related errors during installation

    # Disable the firewall
    systemctl stop firewalld && systemctl disable firewalld

    # Disable SELinux temporarily
    setenforce 0

    # Disable SELinux permanently (takes effect after a reboot); line 7 of a stock
    # /etc/selinux/config is the SELINUX=enforcing line
    sed -i '7s/enforcing/disabled/' /etc/selinux/config
    
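    To verify the result of the commands above:

    getenforce             # should print Permissive now, or Disabled after a reboot
    firewall-cmd --state   # should print "not running"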

    4. Other timeout problems

    Even with the firewall disabled as in step 3, inexplicable timeouts and hangs still occurred during installation. top showed the kswapd process eating a lot of CPU, which pointed to the affected nodes having only 1 GB of RAM configured; after raising it to 2 GB, the deployment was rerun successfully.
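    A quick sketch for spotting the same symptom on a node (kswapd0 near the top of the CPU list with little free memory means the VM is swapping and needs more RAM):

    free -h
    ps -eo pid,comm,%cpu --sort=-%cpu | head -n 10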

    Original article: https://www.cnblogs.com/elfcafe/p/13779619.html