• Learning Kubernetes: Setting Up a Kubernetes Cluster


      The best way to learn Kubernetes is to build a cluster yourself and actually work with it. Following the official tutorials, the process is not complicated, but because of network restrictions many packages and images cannot be downloaded directly, which makes installation somewhat troublesome.

      Learning k8s does not require a real multi-node environment; a single-node cluster on a personal computer is enough. The process is briefly described below, skipping the straightforward steps and focusing on the points that need attention.

    1. Install the virtual machine and Linux

      The hypervisor can be Hyper-V, VirtualBox, or VMware. I used VirtualBox 6.1.0; the download page is https://www.virtualbox.org/wiki/Downloads.

      The OS is CentOS-7-x86_64-Minimal-1908. For learning purposes the Minimal ISO is recommended, since it downloads and installs quickly. Get it from http://isoredirect.centos.org/centos/7/isos/x86_64/ and pick a fast mirror.

      There are plenty of installation walkthroughs online, so the OS install itself is not repeated here. A few suggestions: 1. choose Chinese (or your preferred language) as the installation language; 2. choose Minimal Install in software selection, disable KDUMP, and configure the network connection.

      Notes: 1. In the VM settings, give the VM 2 or more CPUs.

         2. Allocate at least 2 GB of memory.

         3. The firewall causes problems for a k8s cluster; since this is only for learning, it can simply be disabled: systemctl stop firewalld && systemctl disable firewalld

         4. Turn off swap. swapoff -a disables it temporarily; to make it permanent, edit /etc/fstab and comment out the line containing swap, then reboot.

         5. Disable the CentOS graphical login: systemctl set-default multi-user.target

         6. Add an extra network adapter attached to a Host-Only network so the host and the VM can reach each other (see the sketch after this list).
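
      For reference, the same VM settings can also be applied from the host with VBoxManage. This is only a sketch: the VM name "k8s-node1" and the adapter name "vboxnet0" are assumptions and will differ on your machine (on Windows hosts the host-only adapter has a different name).

    # run on the host while the VM is powered off
    VBoxManage modifyvm "k8s-node1" --cpus 2 --memory 2048
    # create a host-only network and attach it as the VM's second NIC
    VBoxManage hostonlyif create
    VBoxManage modifyvm "k8s-node1" --nic2 hostonly --hostonlyadapter2 vboxnet0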

    2. Install Docker

      Start from the official documentation:

      https://docs.docker.com/install/linux/docker-ce/centos/#prerequisites

      https://kubernetes.io/docs/setup/production-environment/container-runtimes/

    # Install Docker CE
    ## Set up the repository
    ### Install required packages.
    yum install yum-utils device-mapper-persistent-data lvm2
    
    ### Add Docker repository.
    ## Note: switch to the Aliyun mirror address
    yum-config-manager --add-repo \
      https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    
    ## Install Docker CE.
    yum update && yum install \
      containerd.io-1.2.10 \
      docker-ce-19.03.4 \
      docker-ce-cli-19.03.4
    
    ## Create /etc/docker directory.
    mkdir /etc/docker
    
    # Setup daemon.
    cat > /etc/docker/daemon.json <<EOF
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2",
      "storage-opts": [
        "overlay2.override_kernel_check=true"
      ]
    }
    EOF
    
    mkdir -p /etc/systemd/system/docker.service.d
    
    # Restart Docker
    systemctl daemon-reload
    systemctl restart docker
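
      After restarting Docker, it is worth confirming that the systemd cgroup driver set in daemon.json actually took effect, since a mismatch between Docker and the kubelet causes kubeadm warnings later:

    docker info | grep -i "cgroup driver"
    # expected output: Cgroup Driver: systemd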

      Enable Docker to start at boot:

    systemctl start docker && systemctl enable docker

      Verify that the installation works:

    docker run hello-world

      The output should look like this:

    Hello from Docker!
    This message shows that your installation appears to be working correctly.
    
    To generate this message, Docker took the following steps:
     1. The Docker client contacted the Docker daemon.
     2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
        (amd64)
     3. The Docker daemon created a new container from that image which runs the
        executable that produces the output you are currently reading.
     4. The Docker daemon streamed that output to the Docker client, which sent it
        to your terminal.
    
    To try something more ambitious, you can run an Ubuntu container with:
     $ docker run -it ubuntu bash
    
    Share images, automate workflows, and more with a free Docker ID:
     https://hub.docker.com/
    
    For more examples and ideas, visit:
     https://docs.docker.com/get-started/

    3. Install Kubernetes

      Official documentation: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

      Note: the repository has to be switched to a mirror reachable from China (Aliyun here).

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
    # Set SELinux in permissive mode (effectively disabling it)
    setenforce 0
    sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    
    yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
    
    systemctl enable --now kubelet
    cat <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system
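
      On a Minimal CentOS 7 install, the two bridge sysctls above can fail with "No such file or directory" if the br_netfilter kernel module is not loaded. If that happens, loading the module (and persisting it across reboots; the file name below is just a convention) should fix it:

    modprobe br_netfilter
    echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
    sysctl --system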

      Enable the kubelet to start at boot:

    systemctl enable kubelet && systemctl start kubelet

    4. Configure a single-node K8S cluster

      Calico is used here as the network plugin for the single-node k8s cluster.

      Official documentation: https://docs.projectcalico.org/v3.11/getting-started/kubernetes/

      Initialize the environment, which downloads and installs the k8s images:

    kubeadm init --pod-network-cidr=192.168.0.0/16

      Note: change the IP range to match your environment.

      Because this step needs to pull the k8s Docker images, it will almost certainly fail from inside China without a proxy. The command errors out because the image pulls fail, with messages similar to the following:

    W1229 11:23:22.589295    1688 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    W1229 11:23:22.590166    1688 version.go:102] falling back to the local client version: v1.17.0
    W1229 11:23:22.590472    1688 validation.go:28] Cannot validate kube-proxy config - no validator is available
    W1229 11:23:22.590492    1688 validation.go:28] Cannot validate kubelet config - no validator is available
    [init] Using Kubernetes version: v1.17.0
    [preflight] Running pre-flight checks
            [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    error execution phase preflight: [preflight] Some fatal errors occurred:
            [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    , error: exit status 1
            [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    , error: exit status 1
            [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    , error: exit status 1
            [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    , error: exit status 1
            [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    , error: exit status 1
            [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.3-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    , error: exit status 1
            [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    , error: exit status 1
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    To see the stack trace of this error execute with --v=5 or higher

      

      If you kept firewalld running, the firewall warning can be resolved by opening the required ports:

      

    firewall-cmd --permanent --add-port=6443/tcp && firewall-cmd --permanent --add-port=10250/tcp && firewall-cmd --reload

      The error messages list exactly which images failed to pull. Someone on GitHub has already mirrored all of these images to registries reachable from China, so they can be downloaded from there.

      GitHub project: https://github.com/anjia0532/gcr.io_mirror

      Image address conversion rules:

      

    gcr.io/namespace/image_name:image_tag
    # is equivalent to
    gcr.azk8s.cn/namespace/image_name:image_tag
    
    # special case
    k8s.gcr.io/{image}:{tag} <==> gcr.io/google-containers/{image}:{tag} <==> gcr.azk8s.cn/google-containers/{image}:{tag}

      For example, one of the images that failed to pull during initialization above is k8s.gcr.io/kube-apiserver:v1.17.0.

    k8s.gcr.io/kube-apiserver:v1.17.0
    # converts to
    gcr.azk8s.cn/google-containers/kube-apiserver:v1.17.0

      Pull the images:

    docker pull gcr.azk8s.cn/google-containers/kube-apiserver:v1.17.0
    docker pull gcr.azk8s.cn/google-containers/kube-controller-manager:v1.17.0
    docker pull gcr.azk8s.cn/google-containers/kube-scheduler:v1.17.0
    docker pull gcr.azk8s.cn/google-containers/kube-proxy:v1.17.0
    docker pull gcr.azk8s.cn/google-containers/pause:3.1
    docker pull gcr.azk8s.cn/google-containers/etcd:3.4.3-0
    docker pull gcr.azk8s.cn/google-containers/coredns:1.6.5

      Because kubeadm init looks for the official image names, each mirrored image needs to be retagged accordingly:

    docker tag gcr.azk8s.cn/google-containers/kube-apiserver:v1.17.0 k8s.gcr.io/kube-apiserver:v1.17.0
    docker tag gcr.azk8s.cn/google-containers/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0
    docker tag gcr.azk8s.cn/google-containers/kube-scheduler:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0
    docker tag gcr.azk8s.cn/google-containers/kube-proxy:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0
    docker tag gcr.azk8s.cn/google-containers/pause:3.1 k8s.gcr.io/pause:3.1
    docker tag gcr.azk8s.cn/google-containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
    docker tag gcr.azk8s.cn/google-containers/coredns:1.6.5 k8s.gcr.io/coredns:1.6.5

      Download all of the required images in the same way.
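
      If you prefer not to pull and retag each image by hand, a small shell loop can do it in one pass. This is only a sketch that assumes the gcr.azk8s.cn mirror is reachable and that the versions match what kubeadm reported (the exact list can also be printed with "kubeadm config images list"):

    for img in kube-apiserver:v1.17.0 kube-controller-manager:v1.17.0 \
               kube-scheduler:v1.17.0 kube-proxy:v1.17.0 \
               pause:3.1 etcd:3.4.3-0 coredns:1.6.5; do
        # pull from the mirror, then retag to the name kubeadm expects
        docker pull gcr.azk8s.cn/google-containers/$img
        docker tag gcr.azk8s.cn/google-containers/$img k8s.gcr.io/$img
    done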

      If you have several machines, repeat the same steps on each of them.

      Set the node hostname:

      Edit /etc/hostname and change the hostname to k8s-node1.
      Edit /etc/hosts and append a line of the form "IP k8s-node1".
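
      On CentOS 7 the same can be done from the shell; the IP below is a placeholder for the node's Host-Only address and must be replaced with your own:

    hostnamectl set-hostname k8s-node1
    # replace 192.168.56.104 with the IP of this node
    echo "192.168.56.104 k8s-node1" >> /etc/hosts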

      Then run the initialization again on the master node; if it fails, run kubeadm reset first and retry:

    kubeadm init --pod-network-cidr=192.168.0.0/16

    ## if additional nodes are needed
    kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.56.104
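
      If you do add worker nodes later, the join command is printed at the end of a successful kubeadm init; it can also be regenerated on the master at any time, roughly like this:

    kubeadm token create --print-join-command
    # run the printed "kubeadm join <master-ip>:6443 --token ... --discovery-token-ca-cert-hash sha256:..." command on each worker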

      Then continue with:

      

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
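
      Since everything in this walkthrough is run as root, the alternative from the kubeadm documentation also works instead of copying the file:

    export KUBECONFIG=/etc/kubernetes/admin.conf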

      Install Calico:

    kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml

      This step also needs to pull the Calico images, which may fail for the same reason. The required images are listed in the yaml file above; look them up and pull them from a mirror in the same way as before. The details are not repeated here.
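
      To see exactly which images the manifest needs before deciding how to fetch them, the image references can be listed straight from the yaml (a sketch; the typical entries are calico/cni, calico/pod2daemon-flexvol, calico/node and calico/kube-controllers at the tag given in the manifest):

    curl -s https://docs.projectcalico.org/v3.11/manifests/calico.yaml | grep "image:" | sort -u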

      Verify that everything is running:

      

    watch kubectl get pods --all-namespaces

      Output like the following means the installation succeeded:

      

    NAMESPACE    NAME                                       READY  STATUS   RESTARTS  AGE
    kube-system  calico-kube-controllers-6ff88bf6d4-tgtzb   1/1    Running  0         2m45s
    kube-system  calico-node-24h85                          1/1    Running  0         2m43s
    kube-system  coredns-846jhw23g9-9af73                   1/1    Running  0         4m5s
    kube-system  coredns-846jhw23g9-hmswk                   1/1    Running  0         4m5s
    kube-system  etcd-jbaker-1                              1/1    Running  0         6m22s
    kube-system  kube-apiserver-jbaker-1                    1/1    Running  0         6m12s
    kube-system  kube-controller-manager-jbaker-1           1/1    Running  0         6m16s
    kube-system  kube-proxy-8fzp2                           1/1    Running  0         5m16s
    kube-system  kube-scheduler-jbaker-1                    1/1    Running  0         5m41s

      If calico-node shows a status such as ErrImagePull or ImagePullBackOff, that image did not download successfully and has to be pulled manually from a domestic mirror; the image names and versions are listed in https://docs.projectcalico.org/v3.11/manifests/calico.yaml.

      Allow the master node to also act as a worker node (remove the master taint):

    kubectl taint nodes --all node-role.kubernetes.io/master-

      The result should be:

    node/<your-hostname> untainted

      Finally, run:

      

    kubectl get nodes -o wide

      Output similar to the following means the cluster is ready:

    NAME              STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
    <your-hostname>   Ready    master   52m   v1.12.2   10.128.0.28   <none>        Ubuntu 18.04.1 LTS   4.15.0-1023-gcp   docker://18.6.1
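
      As a last check that pods can actually be scheduled on this single node, a throwaway pod can be created and removed (the name test-nginx is arbitrary):

    kubectl run test-nginx --image=nginx --restart=Never
    kubectl get pod test-nginx -o wide
    kubectl delete pod test-nginx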

      
