• k8s cluster installation (single node)


    一、Environment

    1.1、Host inventory

    Role    IP             Hostname  Installed software                                          Notes
    Master  10.199.142.31  k8sm1     etcd kube-apiserver kube-controller-manager kube-scheduler  kubectl is installed on this machine
    Master  10.199.142.32  k8sm2     etcd                                                        only needed for a cluster install; unused for single node
    Master  10.199.142.33  k8sm3     etcd                                                        only needed for a cluster install; unused for single node
    Node    10.199.142.34  k8sn1     docker kubelet kube-proxy
    Node    10.199.142.35  k8sn2     docker kubelet kube-proxy

    1.2、Working directories

    Path                           Description
    /root/k8s/                     Holds the installation files and scripts
    /opt/etcd/{cfg,bin,ssl,logs}   etcd install location (cfg: configuration, bin: binaries, ssl: certificates, logs: logs)
    /opt/k8s/{cfg,bin,ssl,logs}    kubernetes install location (same subdirectory layout)

    1.3、Downloads

    • cfssl:https://pkg.cfssl.org/
    # Download links used for this install (version 1.2):
    https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
    • etcd:https://github.com/etcd-io/etcd/releases
    # Download link used for this install (version 3.4.15):
    https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz
    • k8s:https://github.com/kubernetes/kubernetes/releases     (tip: open the CHANGELOG of the corresponding version to find the binary download links)
    # Download link used for this install (version 1.20.4):
    https://storage.googleapis.com/kubernetes-release/release/v1.20.4/kubernetes-server-linux-amd64.tar.gz

    Summary of the downloaded files, placed by default under /root/k8s/ on the deployment server:

    cfssl_linux-amd64
    cfssl-certinfo_linux-amd64
    cfssljson_linux-amd64
    etcd-v3.4.15-linux-amd64.tar.gz
    kubernetes-server-linux-amd64.tar.gz

    二、Host preparation

    (Unless stated otherwise, run these steps on all hosts!)

    2.1、Set hostnames

    (Run the matching command on its corresponding host)

    hostnamectl set-hostname k8sm1
    hostnamectl set-hostname k8sm2
    hostnamectl set-hostname k8sm3
    hostnamectl set-hostname k8sn1
    hostnamectl set-hostname k8sn2

    2.2、Update /etc/hosts

    cat >> /etc/hosts <<EOF
    10.199.142.31 k8sm1
    10.199.142.32 k8sm2
    10.199.142.33 k8sm3
    10.199.142.34 k8sn1
    10.199.142.35 k8sn2
    EOF

    2.3、Set the time zone and sync the time

    timedatectl set-timezone Asia/Shanghai
    # If the ntpdate command is missing, install it first: yum install ntpdate -y
    ntpdate pool.ntp.org
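
    Optionally, a cron entry can keep the clock in sync periodically (a minimal sketch, assuming ntpdate was installed as above):

    # sync against pool.ntp.org every 30 minutes via root's crontab
    echo "*/30 * * * * /usr/sbin/ntpdate pool.ntp.org >/dev/null 2>&1" >> /var/spool/cron/root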

    2.4、Disable SELinux

    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
    setenforce 0

    2.5、Disable the firewall

    systemctl disable firewalld
    systemctl stop firewalld

    2.6、Disable swap

    swapoff -a
    vim /etc/fstab   # comment out or delete the swap line to disable swap permanently
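
    Instead of editing the file by hand, the swap entry can be commented out with sed (a sketch; double-check /etc/fstab afterwards):

    # comment out any non-comment fstab line that mounts swap
    sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab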

    2.7、Set up SSH trust

    (From the k8sm1 node, enable passwordless ssh to every machine!)
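
    If k8sm1 does not have an SSH key pair yet, generate one first (a minimal sketch; the empty passphrase is chosen here purely for convenience):

    # generate an RSA key pair without a passphrase (skip if ~/.ssh/id_rsa already exists)
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa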

    # Run on the Master node only; used mainly to distribute files and run remote commands
    NODE_IPS=("10.199.142.31" "10.199.142.32" "10.199.142.33" "10.199.142.34" "10.199.142.35")
    for node_ip in ${NODE_IPS[@]};do
        echo ">>> ${node_ip}"
        ssh-copy-id -i ~/.ssh/id_rsa.pub root@${node_ip}
    done

    2.8、Install docker

    (Install on the Node hosts)

    # Download the yum repo file
    wget -O /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    
    # Install docker-ce; a specific docker-ce version can also be chosen (old docker versions do not support newer kubernetes releases)
    yum -y install docker-ce
    # Or install a pinned version (pick one of the two)
    yum -y install docker-ce-18.09.1-3.el7
    
    # Configure a Docker registry mirror
    mkdir -p /etc/docker/
    cat > /etc/docker/daemon.json <<EOF
    {
     "exec-opts":["native.cgroupdriver=systemd"],
     "registry-mirrors": ["http://hub-mirror.c.163.com"]
    }
    EOF
    
    # Start docker and enable it at boot
    systemctl start docker && systemctl enable docker

    2.9、Pass bridged IPv4 traffic to the iptables chains

    cat > /etc/sysctl.d/k8s.conf << EOF
    net.ipv4.ip_nonlocal_bind = 1    
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_local_port_range = 10000 65000
    fs.file-max = 2000000
    vm.swappiness = 0
    EOF
    
    # Load the br_netfilter module
    modprobe br_netfilter
    
    # Check that it is loaded
    lsmod | grep br_netfilter
    
    # Apply the settings
    sysctl --system

    2.10、Create the install directories

    (Run on the k8sm1 node)

    # Create the k8s working and install directories on all nodes
    NODE_IPS=("10.199.142.31" "10.199.142.32" "10.199.142.33" "10.199.142.34" "10.199.142.35")
    for node_ip in ${NODE_IPS[@]};do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "mkdir -p /root/k8s/ssl && mkdir -p /opt/k8s/{cfg,bin,ssl,logs}"
    done
    
    # Create the etcd install directory on the Master nodes
    NODE_IPS=("10.199.142.31" "10.199.142.32" "10.199.142.33")
    for node_ip in ${NODE_IPS[@]};do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "mkdir -p /opt/etcd/{cfg,bin,ssl}"
    done

    三、Create the CA certificate and key

    3.1、Install the certificate tools

    Installation script (/root/k8s/install_cfssl.sh), contents:

    # install_cfssl.sh
    # cfssl: generates certificates
    # cfssljson: turns the JSON output into certificate files
    # cfssl-certinfo: displays certificate information
    
    curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
    curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
    curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
    
    chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
    
    echo ">>> Install result"
    ls -trl /usr/local/bin/cfssl*

    Install cfssl:

    [root@k8sm1 k8s]# sh install_cfssl.sh

    3.2、Create the root certificate

    3.2.1、Create the CA certificate config file

    Create the default file:

    [root@k8sm1 ssl]# cfssl print-defaults config > ca-config.json

    The default contents it generates need to be modified. Final version:

    # Final version: /root/k8s/ssl/ca-config.json
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "kubernetes": {
            "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ],
            "expiry": "87600h"
          }
        }
      }
    }
    • ca-config.json: multiple profiles can be defined, each with its own expiry, usages and other parameters; a specific profile is then chosen when signing a certificate;
    • signing: the certificate can be used to sign other certificates; the generated ca.pem will contain CA=TRUE;
    • server auth: a client may use this CA to verify the certificate presented by a server;
    • client auth: a server may use this CA to verify the certificate presented by a client;

    3.2.2、Create the CA certificate signing request file

    Create the default file:

    [root@k8sm1 k8s]# cfssl print-defaults csr > ca-csr.json

    The default contents it generates need to be modified. Final version:

    # Final version: /root/k8s/ssl/ca-csr.json
    {
        "CN": "kubernetes",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
              "C": "CN",
              "ST": "BeiJing",
              "L": "BeiJing",
              "O": "k8s",
              "OU": "System"
            }
        ]
    }
    • CN: Common Name. kube-apiserver extracts this field from a certificate and uses it as the request's User Name; browsers use it to check whether a site is legitimate;
    • O: Organization. kube-apiserver extracts this field and uses it as the request's user Group;

    3.2.3、Generate the certificate (ca.pem) and key (ca-key.pem)

    [root@k8sm1 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca

    Result:

    The generated certificate files are stored on k8sm1 under /root/k8s/ssl:

    -rw-r--r-- 1 root root  292 Mar 17 20:40 ca-config.json
    -rw-r--r-- 1 root root  254 Mar 17 20:40 ca-csr.json
    # ca.csr is the certificate signing request
    -rw-r--r-- 1 root root 1001 Mar 17 20:42 ca.csr
    # ca-key.pem is the CA private key
    -rw------- 1 root root 1679 Mar 17 20:42 ca-key.pem
    # ca.pem is the CA certificate
    -rw-r--r-- 1 root root 1359 Mar 17 20:42 ca.pem

    四、Cluster environment variables

    4.1、Predefine the network ranges and variables

    Environment variable script (/root/k8s/env.sh), contents:

    export PATH=/opt/k8s/bin:/opt/etcd/bin:$PATH
    
    # Token used for TLS Bootstrapping;
    # it can be generated with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
    BOOTSTRAP_TOKEN="555898ed9c5d0ba16cf76fec2c8f94ef"
    
    # Service network (Service CIDR), IP:Port used inside the cluster
    SERVICE_CIDR="10.200.0.0/16"
    # Pod network (Cluster CIDR)
    CLUSTER_CIDR="10.100.0.0/16"
    
    # Service port range (NodePort Range)
    NODE_PORT_RANGE="30000-32766"
    
    # IPs of all machines in the etcd cluster
    ETCD_IPS="10.199.142.31,10.199.142.32,10.199.142.33"
    
    # etcd cluster client endpoints
    ETCD_ENDPOINTS="https://10.199.142.31:2379,https://10.199.142.32:2379,https://10.199.142.33:2379"
    
    # IPs and ports used for communication between etcd cluster members
    ETCD_NODES="etcd01=https://10.199.142.31:2380,etcd02=https://10.199.142.32:2380,etcd03=https://10.199.142.33:2380"
    
    # flanneld network config prefix
    FLANNEL_ETCD_PREFIX="/kubernetes/network"
    
    # kubernetes service IP (pre-allocated, usually the first IP in SERVICE_CIDR)
    CLUSTER_KUBERNETES_SVC_IP="10.200.0.1"
    
    # cluster DNS service IP (pre-allocated from SERVICE_CIDR)
    CLUSTER_DNS_SVC_IP="10.200.0.2"
    
    # cluster DNS domain
    CLUSTER_DNS_DOMAIN="cluster.local"
    
    # MASTER API Server address
    #MASTER_URL="k8s-api.virtual.local"
    MASTER_URL="10.199.142.31"

    4.2、Distribute the environment variable file

    (Run on the k8sm1 node)

    NODE_IPS=("10.199.142.31" "10.199.142.32" "10.199.142.33" "10.199.142.34" "10.199.142.35")
    for node_ip in ${NODE_IPS[@]};do
        echo ">>> ${node_ip}"
        scp /root/k8s/env.sh root@${node_ip}:/root/k8s/
    done

    五、Deploy etcd

    5.1、Unpack etcd

    [root@k8sm1 k8s]# tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
    [root@k8sm1 k8s]# cd etcd-v3.4.15-linux-amd64
    [root@k8sm1 etcd-v3.4.15-linux-amd64]# mv etcd etcdctl /opt/etcd/bin/

    5.2、Create the etcd certificate

    • Create the etcd certificate signing request file

    cat > /root/k8s/ssl/etcd-csr.json <<EOF
    {
      "CN": "etcd",
      "hosts": [
        "127.0.0.1",
        "10.199.142.31",
        "10.199.142.32",
        "10.199.142.33"
      ],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF

    Tip: the hosts field lists the etcd cluster node IPs that are authorized to use this certificate.

    • Generate the certificate

    [root@k8sm1 ssl]# cd /root/k8s/ssl
    [root@k8sm1 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd 
    [root@k8sm1 ssl]# cp ca*.pem etcd*.pem /opt/etcd/ssl/

    5.3、Write the etcd install script

    etcd install script (/root/k8s/install_etcd.sh), contents:

    #!/usr/bin/bash
    # example: ./install_etcd.sh etcd01 10.199.142.31
    
    # arguments: the name and IP of this etcd member
    ETCD_NAME=$1
    ETCD_IP=$2
    # etcd install path
    WORK_DIR=/opt/etcd
    # data directory
    DATA_DIR=/var/lib/etcd/default.etcd
    
    # load the predefined variables
    source /root/k8s/env.sh
    
    # create this node's config file
    cat >$WORK_DIR/cfg/etcd.conf <<EOF
    # [Member]
    ETCD_NAME="${ETCD_NAME}"
    ETCD_DATA_DIR="${DATA_DIR}"
    ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
    ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
    
    # [Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
    ETCD_INITIAL_CLUSTER="${ETCD_NODES}"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    EOF
    
    # create the systemd unit for this node
    cat >/usr/lib/systemd/system/etcd.service <<EOF
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target
    
    [Service]
    Type=notify
    EnvironmentFile=${WORK_DIR}/cfg/etcd.conf
    ExecStart=${WORK_DIR}/bin/etcd \
    --cert-file=${WORK_DIR}/ssl/etcd.pem \
    --key-file=${WORK_DIR}/ssl/etcd-key.pem \
    --peer-cert-file=${WORK_DIR}/ssl/etcd.pem \
    --peer-key-file=${WORK_DIR}/ssl/etcd-key.pem \
    --trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
    --peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    # reload systemd, enable at boot and (re)start the service
    systemctl daemon-reload
    systemctl enable etcd
    systemctl restart etcd
    • The etcd working and data directory is /var/lib/etcd; create this directory before starting the service;
    • To secure communication, the script sets etcd's certificate and key (cert-file and key-file), the peer certificate, key and CA (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA used to verify clients (trusted-ca-file);
    • When --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;

    5.4、Distribute and start the etcd service

    • Run the script (the service will not start cleanly yet and reports errors, because this is a cluster and the other members cannot be reached)
    [root@hz-yf-xtax-it-199-142-31 k8s]# sh install_etcd.sh etcd01 10.199.142.31
    • Distribute to the other etcd nodes
    NODE_IPS=("10.199.142.31" "10.199.142.32" "10.199.142.33")
    for node_ip in ${NODE_IPS[@]};do
        echo ">>> ${node_ip}"
        scp -r /opt/etcd/ root@${node_ip}:/opt/
        scp /usr/lib/systemd/system/etcd.service root@${node_ip}:/usr/lib/systemd/system/
    done
    • Adjust each node's config (/opt/etcd/cfg/etcd.conf)

    Replace the copied name and IPs with this node's own values, e.g. for node two:

    # [Member]
    ETCD_NAME="etcd02"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://10.199.142.32:2380"
    ETCD_LISTEN_CLIENT_URLS="https://10.199.142.32:2379"
    
    # [Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.199.142.32:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://10.199.142.32:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://10.199.142.31:2380,etcd02=https://10.199.142.32:2380,etcd03=https://10.199.142.33:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"

    Node three is handled the same way.

    • Start the etcd service

    Run systemctl start etcd on each etcd node.
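
    The same NODE_IPS/ssh pattern used elsewhere in this document can start every member from k8sm1 (a sketch, assuming the SSH trust from section 2.7 is in place):

    NODE_IPS=("10.199.142.31" "10.199.142.32" "10.199.142.33")
    for node_ip in ${NODE_IPS[@]};do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "systemctl start etcd && systemctl is-active etcd"
    done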

    5.5、Verify the etcd cluster:

    ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://10.199.142.31:2379,https://10.199.142.32:2379,https://10.199.142.33:2379" endpoint health --write-out=table

    Result:

    +----------------------------+--------+-------------+-------+
    |          ENDPOINT          | HEALTH |    TOOK     | ERROR |
    +----------------------------+--------+-------------+-------+
    | https://10.199.142.32:2379 |   true | 11.890704ms |       |
    | https://10.199.142.31:2379 |   true | 11.739482ms |       |
    | https://10.199.142.33:2379 |   true | 12.842723ms |       |
    +----------------------------+--------+-------------+-------+
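
    Optionally, the member list can be inspected the same way (a sketch reusing the same certificates and one of the endpoints):

    ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://10.199.142.31:2379" member list --write-out=table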

    六、Install kubectl

    kubectl reads the kube-apiserver address, certificate, user name, and other access information from ~/.kube/config by default; this file must be configured correctly for kubectl to work.
    Copy the downloaded kubectl binary and the generated ~/.kube/config to every machine that needs to run kubectl.

    Many readers ask on which node this should be executed. kubectl is just a command-line client that talks to kube-apiserver, so it can be installed on any node, master or worker alike. For example, install it on the master first so kubectl works there; if you also want to use it on a node (the installation steps will certainly need it), just copy the kubectl binary and the ~/.kube/config file from the master to that node.

    6.1、Unpack kubectl

    [root@k8sm1 k8s]# tar -xvf kubernetes-server-linux-amd64.tar.gz
    [root@k8sm1 k8s]# cd kubernetes/server/bin/
    [root@k8sm1 bin]# cp kubectl /usr/local/bin/

    6.2、Create the admin certificate

    6.2.1、Create the admin certificate signing request file

    kubectl talks to kube-apiserver over its secure port, which requires a TLS certificate and key. Create the admin certificate signing request:

    cat >/root/k8s/ssl/admin-csr.json <<EOF
    {
      "CN": "admin",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "system:masters",
          "OU": "System"
        }
      ]
    }
    EOF
    • kube-apiserver later uses RBAC to authorize client requests (from kubelet, kube-proxy, Pods);
    • kube-apiserver predefines several RBAC RoleBindings, e.g. cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call all kube-apiserver APIs;
    • O sets the certificate's Group to system:masters; when kubectl presents this certificate to kube-apiserver, authentication succeeds because it is signed by the CA, and since the group is the pre-authorized system:masters it is granted access to all APIs;
    • the hosts field is an empty list;

    6.2.2、Generate the certificate

    [root@k8sm ssl]# cfssl gencert -ca=/root/k8s/ssl/ca.pem -ca-key=/root/k8s/ssl/ca-key.pem -config=/root/k8s/ssl/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin 

    6.2.3、Copy the certificate

    [root@k8sm ssl]# cp admin*.pem /opt/k8s/ssl/

    6.3、Create the kubectl setup script

    kubectl setup script (/root/k8s/install_kubectl.sh), contents:

    # load the predefined variables
    source /root/k8s/env.sh
    
    # default apiserver secure port: 6443
    KUBE_APISERVER="https://${MASTER_URL}:6443"
    
    # set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/opt/k8s/ssl/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER}
    # set client credentials
    kubectl config set-credentials admin \
      --client-certificate=/opt/k8s/ssl/admin.pem \
      --embed-certs=true \
      --client-key=/opt/k8s/ssl/admin-key.pem \
      --token=${BOOTSTRAP_TOKEN}
    # set the context
    kubectl config set-context kubernetes \
      --cluster=kubernetes \
      --user=admin
    # use the context by default
    kubectl config use-context kubernetes
    • The O field of admin.pem is system:masters; the predefined RoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call the kube-apiserver APIs
    • The generated kubeconfig is saved to the ~/.kube/config file

    6.4、Generate the config file

    [root@k8sm k8s]# sh install_kubectl.sh

    Tip: to distribute kubectl to another machine (see the sketch below):

    • copy the kubectl binary;
    • copy ~/.kube/config into the ~/.kube/ directory of the machine that will run kubectl;
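
    A minimal sketch, assuming the target machine is the node 10.199.142.34:

    scp /usr/local/bin/kubectl root@10.199.142.34:/usr/local/bin/
    ssh root@10.199.142.34 "mkdir -p ~/.kube"
    scp ~/.kube/config root@10.199.142.34:~/.kube/config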

    Then verify that kubectl can reach the cluster:

    kubectl get cs

    七、Install the Master node

    The kubernetes master node consists of the following components:

    • kube-apiserver
    • kube-scheduler
    • kube-controller-manager

    For now these three components are deployed on the same machine (a highly available master can be set up later):

    • the functions of kube-scheduler, kube-controller-manager and kube-apiserver are tightly related;
    • only one kube-scheduler and one kube-controller-manager process may be active at a time; when several run, a leader is chosen by election;

    The master node communicates with the Pods on the node machines over the Pod network, so the Pod network also needs to be deployed on the master node.

    7.1、Copy the binaries

    [root@k8sm1 ~]# cd /root/k8s/kubernetes/server/bin
    [root@k8sm1 bin]# cp kube-apiserver kube-controller-manager kube-scheduler /opt/k8s/bin/

    7.2、Create the kubernetes certificate

    7.2.1、Create the kube-apiserver certificate signing request file

    cat > /root/k8s/ssl/kube-apiserver-csr.json <<EOF
    {
      "CN": "kubernetes",
      "hosts": [
        "127.0.0.1",
        "10.200.0.1",
        "10.199.142.31",
        "10.199.142.32",
        "10.199.142.33",
        "10.199.142.34",
        "10.199.142.35",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
      ],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF
    • Since the hosts field is not empty, it must list the IPs or domain names authorized to use the certificate, so the master and node IPs and the apiserver's internal service names are listed above
    • The Service Cluster IP registered by kube-apiserver for the kubernetes service must also be added; it is normally the first IP of the --service-cluster-ip-range network, 10.200.0.1 here (included in the hosts list above)

    7.2.2、Generate the certificate

    cfssl gencert -ca=/root/k8s/ssl/ca.pem   -ca-key=/root/k8s/ssl/ca-key.pem   -config=/root/k8s/ssl/ca-config.json   -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

    7.2.3、Copy the certificate into place

    cp kube-apiserver*.pem /opt/k8s/ssl/

    7.3、Configure and start kube-apiserver

    7.3.1、Create the token file

    Create the client token file used by kube-apiserver. When kubelet starts for the first time it sends a TLS Bootstrapping request to kube-apiserver; kube-apiserver checks whether the token in the request matches its token.csv, and if so it automatically issues a certificate and key for that kubelet.

    source /root/k8s/env.sh
    # fields written: token, user name, uid, group
    cat >/opt/k8s/cfg/token.csv <<EOF
    ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    EOF

    7.3.2、Create the kube-apiserver install script

    kube-apiserver install script (/root/k8s/install_kube-apiserver.sh), contents:

    #!/usr/bin/bash
    # example: ./install_kube-apiserver.sh 10.199.142.31
    NODE_IP=$1
    
    K8S_DIR=/opt/k8s
    ETCD_DIR=/opt/etcd
    
    source /root/k8s/env.sh
    
    cat >${K8S_DIR}/cfg/kube-apiserver.conf <<EOF
    KUBE_APISERVER_OPTS="--logtostderr=false \
    --v=2 \
    --log-dir=${K8S_DIR}/logs \
    --etcd-servers=${ETCD_ENDPOINTS} \
    --bind-address=${NODE_IP} \
    --secure-port=6443 \
    --advertise-address=${NODE_IP} \
    --allow-privileged=true \
    --service-cluster-ip-range=${SERVICE_CIDR} \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
    --authorization-mode=RBAC,Node \
    --enable-bootstrap-token-auth=true \
    --token-auth-file=${K8S_DIR}/cfg/token.csv \
    --service-node-port-range=${NODE_PORT_RANGE} \
    --kubelet-client-certificate=${K8S_DIR}/ssl/kube-apiserver.pem \
    --kubelet-client-key=${K8S_DIR}/ssl/kube-apiserver-key.pem \
    --tls-cert-file=${K8S_DIR}/ssl/kube-apiserver.pem  \
    --tls-private-key-file=${K8S_DIR}/ssl/kube-apiserver-key.pem \
    --client-ca-file=${K8S_DIR}/ssl/ca.pem \
    --service-account-key-file=${K8S_DIR}/ssl/ca-key.pem \
    --service-account-issuer=api \
    --service-account-signing-key-file=${K8S_DIR}/ssl/kube-apiserver-key.pem \
    --etcd-cafile=${ETCD_DIR}/ssl/ca.pem \
    --etcd-certfile=${ETCD_DIR}/ssl/etcd.pem \
    --etcd-keyfile=${ETCD_DIR}/ssl/etcd-key.pem \
    --requestheader-client-ca-file=${K8S_DIR}/ssl/ca.pem \
    --proxy-client-cert-file=${K8S_DIR}/ssl/kube-apiserver.pem \
    --proxy-client-key-file=${K8S_DIR}/ssl/kube-apiserver-key.pem \
    --requestheader-allowed-names=kubernetes \
    --requestheader-extra-headers-prefix=X-Remote-Extra- \
    --requestheader-group-headers=X-Remote-Group \
    --requestheader-username-headers=X-Remote-User \
    --enable-aggregator-routing=true \
    --audit-log-maxage=30 \
    --audit-log-maxbackup=3 \
    --audit-log-maxsize=100 \
    --audit-log-path=${K8S_DIR}/logs/k8s-audit.log"
    EOF
    
    # create the systemd unit
    cat >/usr/lib/systemd/system/kube-apiserver.service <<EOF
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=${K8S_DIR}/cfg/kube-apiserver.conf
    ExecStart=${K8S_DIR}/bin/kube-apiserver \$KUBE_APISERVER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    # reload systemd, enable at boot and (re)start the service
    systemctl daemon-reload
    systemctl enable kube-apiserver
    systemctl restart kube-apiserver

    7.3.3、Start kube-apiserver

    [root@k8sm k8s]# sh install_kube-apiserver.sh 10.199.142.31

    7.3.4、Verify the service

    systemctl status kube-apiserver
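
    Besides systemctl status, the health endpoint can be probed directly (a sketch; with the default RBAC bindings, /healthz is readable without presenting a client certificate):

    curl -k https://10.199.142.31:6443/healthz
    # should print: ok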

    7.4、Configure and start kube-controller-manager

    Generate the certificate and kubeconfig:

    cat > /root/k8s/ssl/kube-controller-manager-csr.json << EOF
    {
      "CN": "system:kube-controller-manager",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing", 
          "ST": "BeiJing",
          "O": "system:masters",
          "OU": "System"
        }
      ]
    }
    EOF
    
    cd /root/k8s/ssl
    
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
    
    cp kube-controller-manager*.pem /opt/k8s/ssl/
    K8S_DIR=/opt/k8s
    
    source /root/k8s/env.sh
    
    KUBE_CONFIG="${K8S_DIR}/cfg/kube-controller-manager.kubeconfig"
    KUBE_APISERVER="https://${MASTER_URL}:6443"
    
    kubectl config set-cluster kubernetes \
      --certificate-authority=${K8S_DIR}/ssl/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=${KUBE_CONFIG}
    kubectl config set-credentials kube-controller-manager \
      --client-certificate=${K8S_DIR}/ssl/kube-controller-manager.pem \
      --client-key=${K8S_DIR}/ssl/kube-controller-manager-key.pem \
      --embed-certs=true \
      --kubeconfig=${KUBE_CONFIG}
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kube-controller-manager \
      --kubeconfig=${KUBE_CONFIG}
    kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

    7.4.1、Create the kube-controller-manager install script

    kube-controller-manager install script (/root/k8s/install_kube-controller-manager.sh), contents:

    #!/usr/bin/bash
    # example: ./install_kube-controller-manager.sh
    K8S_DIR=/opt/k8s
    
    source /root/k8s/env.sh
    
    cat > ${K8S_DIR}/cfg/kube-controller-manager.conf << EOF
    KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
    --v=2 \\
    --log-dir=${K8S_DIR}/logs \\
    --leader-elect=true \\
    --kubeconfig=${K8S_DIR}/cfg/kube-controller-manager.kubeconfig \\
    --bind-address=127.0.0.1 \\
    --allocate-node-cidrs=true \\
    --cluster-cidr=${CLUSTER_CIDR} \\
    --service-cluster-ip-range=${SERVICE_CIDR} \\
    --cluster-signing-cert-file=${K8S_DIR}/ssl/ca.pem \\
    --cluster-signing-key-file=${K8S_DIR}/ssl/ca-key.pem  \\
    --root-ca-file=${K8S_DIR}/ssl/ca.pem \\
    --service-account-private-key-file=${K8S_DIR}/ssl/ca-key.pem \\
    --cluster-signing-duration=87600h0m0s"
    EOF
    
    cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=${K8S_DIR}/cfg/kube-controller-manager.conf
    ExecStart=${K8S_DIR}/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl start kube-controller-manager
    systemctl enable kube-controller-manager
    • --bind-address is 127.0.0.1, because kube-apiserver expects scheduler and controller-manager to run on the same machine
    • --master=http://${MASTER_URL}:8080 would talk to kube-apiserver over the insecure http port; this setup instead uses the kubeconfig above, which points at the secure 6443 port
    • --cluster-cidr sets the CIDR range for Pods in the cluster; this network must be routable between Nodes (guaranteed by the network plugin)
    • --service-cluster-ip-range sets the CIDR range for Services in the cluster; this network must NOT be routable between Nodes, and the value must match the kube-apiserver setting
    • the --cluster-signing-* certificate and key are used to sign the certificates created for TLS BootStrap
    • --root-ca-file is used to verify the kube-apiserver certificate; when set, this CA certificate is placed in Pods' ServiceAccounts
    • --leader-elect=true elects a single active kube-controller-manager process when a multi-machine master cluster is deployed

    7.4.2、Start kube-controller-manager

    [root@k8sm k8s]# sh install_kube-controller-manager.sh

    7.4.3、Verify the service

    systemctl status kube-controller-manager

    7.5、Configure and start kube-scheduler

    Generate the certificate and kubeconfig:

    cat > /root/k8s/ssl/kube-scheduler-csr.json << EOF
    {
      "CN": "system:kube-scheduler",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "system:masters",
          "OU": "System"
        }
      ]
    }
    EOF
    
    cd /root/k8s/ssl
    
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
    
    cp kube-scheduler*.pem /opt/k8s/ssl/
    K8S_DIR=/opt/k8s
    source /root/k8s/env.sh
    
    KUBE_CONFIG="${K8S_DIR}/cfg/kube-scheduler.kubeconfig"
    KUBE_APISERVER="https://${MASTER_URL}:6443"
    
    kubectl config set-cluster kubernetes \
      --certificate-authority=${K8S_DIR}/ssl/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=${KUBE_CONFIG}
    kubectl config set-credentials kube-scheduler \
      --client-certificate=${K8S_DIR}/ssl/kube-scheduler.pem \
      --client-key=${K8S_DIR}/ssl/kube-scheduler-key.pem \
      --embed-certs=true \
      --kubeconfig=${KUBE_CONFIG}
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kube-scheduler \
      --kubeconfig=${KUBE_CONFIG}
    kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

    7.5.1、Create the kube-scheduler install script

    kube-scheduler install script (/root/k8s/install_kube-scheduler.sh), contents:

    #!/usr/bin/bash
    # example: ./install_kube-scheduler.sh
    K8S_DIR=/opt/k8s
    source /root/k8s/env.sh
    
    cat > ${K8S_DIR}/cfg/kube-scheduler.conf << EOF
    KUBE_SCHEDULER_OPTS="--logtostderr=false \
    --v=2 \
    --log-dir=${K8S_DIR}/logs \
    --leader-elect \
    --kubeconfig=${K8S_DIR}/cfg/kube-scheduler.kubeconfig \
    --bind-address=127.0.0.1"
    EOF
    
    cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=${K8S_DIR}/cfg/kube-scheduler.conf
    ExecStart=${K8S_DIR}/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl start kube-scheduler
    systemctl enable kube-scheduler
    • --bind-address is 127.0.0.1, because kube-apiserver expects scheduler and controller-manager to run on the same machine
    • --master=http://${MASTER_URL}:8080 would talk to kube-apiserver over the insecure http port; this setup instead uses the kubeconfig above, which points at the secure 6443 port
    • --leader-elect=true elects a single active kube-scheduler process when a multi-machine master cluster is deployed

    7.5.2、Start kube-scheduler

    sh install_kube-scheduler.sh 

    7.5.3、Verify the kube-scheduler service

    systemctl status kube-scheduler

    7.6、Verify the Master node

    [root@k8sm k8s]# /opt/k8s/bin/kubectl get componentstatuses

    Result: scheduler, controller-manager and the etcd members should all report a Healthy status.

    7.7、Create the bootstrap role

    Create the bootstrap role binding that allows the bootstrap user to connect to the apiserver and request certificate signing:

    kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

    八、Deploy the Node hosts

    8.1、Distribute the binaries

    NODE_IPS=("10.199.142.34" "10.199.142.35")
    for node_ip in ${NODE_IPS[@]};do
        echo ">>> ${node_ip}"
        scp /root/k8s/kubernetes/server/bin/kubelet root@${node_ip}:/opt/k8s/bin/
        scp /root/k8s/kubernetes/server/bin/kube-proxy root@${node_ip}:/opt/k8s/bin/
    done

    8.2、Configure docker

    8.2.1、Modify the systemd unit file (skipped for now)

    vim /usr/lib/systemd/system/docker.service
    
    # original:
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    
    # modified:
    EnvironmentFile=-/run/flannel/docker
    ExecStart=/usr/bin/dockerd --log-level=info $DOCKER_NETWORK_OPTIONS

    8.2.2、Restart docker

    systemctl daemon-reload
    systemctl restart docker
    
    

    8.3、Install kubelet

    8.3.1、Configure kubelet (generate the bootstrap kubeconfig)

    K8S_DIR=/opt/k8s
    source /root/k8s/env.sh
    
    KUBE_CONFIG="/root/k8s/bootstrap.kubeconfig"
    KUBE_APISERVER="https://${MASTER_URL}:6443"
    
    kubectl config set-cluster kubernetes \
      --certificate-authority=${K8S_DIR}/ssl/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=${KUBE_CONFIG}
    kubectl config set-credentials "kubelet-bootstrap" \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=${KUBE_CONFIG}
    kubectl config set-context default \
      --cluster=kubernetes \
      --user="kubelet-bootstrap" \
      --kubeconfig=${KUBE_CONFIG}
    kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

    mv bootstrap.kubeconfig /root/k8s/

    scp /root/k8s/bootstrap.kubeconfig root@10.199.142.34:/opt/k8s/cfg/

    scp /root/k8s/ssl/ca*.pem root@10.199.142.34:/opt/k8s/ssl/

    The older version of this config script (/root/k8s/gen_bootstrap-kubeconfig.sh) below has been superseded by the commands above:

    #!/usr/bin/bash
    
    source /root/k8s/env.sh
    KUBE_APISERVER="https://${MASTER_URL}:6443"
    
    # set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/opt/k8s/ssl/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=bootstrap.kubeconfig
    # set client credentials
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=bootstrap.kubeconfig
    # set the context
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=bootstrap.kubeconfig
    # use the context by default
    kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
    
    NODE_IPS=("192.168.56.108" "192.168.56.109")
    for node_ip in ${NODE_IPS[@]};do
        echo ">>> ${node_ip}"
        scp /root/k8s/bootstrap.kubeconfig root@${node_ip}:/opt/k8s/cfg/
    done

    8.3.2、Generate the config file

    [root@k8sm k8s]# sh gen_bootstrap-kubeconfig.sh

     

    8.3.3、Create the kubelet install script

    kubelet install script (/root/k8s/install_kubelet.sh), contents:

    #!/usr/bin/bash
    
    NODE_NAME=$1
    
    K8S_DIR=/opt/k8s
    source /root/k8s/env.sh
    
    cat >${K8S_DIR}/cfg/kubelet.conf <<EOF
    KUBELET_OPTS="--logtostderr=false \
    --v=2 \
    --log-dir=${K8S_DIR}/logs \
    --hostname-override=${NODE_NAME} \
    --network-plugin=cni \
    --kubeconfig=${K8S_DIR}/cfg/kubelet.kubeconfig \
    --bootstrap-kubeconfig=${K8S_DIR}/cfg/bootstrap.kubeconfig \
    --config=${K8S_DIR}/cfg/kubelet-config.yml \
    --cert-dir=${K8S_DIR}/ssl \
    --pod-infra-container-image=k8s.gcr.io/pause:3.2"
    EOF
    
    cat >${K8S_DIR}/cfg/kubelet-config.yml <<EOF
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: 0.0.0.0
    port: 10250
    readOnlyPort: 10255
    cgroupDriver: systemd
    clusterDNS:
      - ${CLUSTER_DNS_SVC_IP}
    clusterDomain: ${CLUSTER_DNS_DOMAIN}
    failSwapOn: false
    
    # authentication
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 2m0s
        enabled: true
      x509:
        clientCAFile: ${K8S_DIR}/ssl/ca.pem
    
    # authorization
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 5m0s
        cacheUnauthorizedTTL: 30s
    
    # node resource reservation / eviction thresholds
    evictionHard:
      imagefs.available: 15%
      memory.available: 1G
      nodefs.available: 10%
      nodefs.inodesFree: 5%
    evictionPressureTransitionPeriod: 5m0s
    
    # image garbage collection policy
    imageGCHighThresholdPercent: 85
    imageGCLowThresholdPercent: 80
    imageMinimumGCAge: 2m0s
    
    # certificate rotation
    rotateCertificates: true    # rotate the kubelet client certificate
    featureGates:
      RotateKubeletServerCertificate: true
      RotateKubeletClientCertificate: true
    
    maxOpenFiles: 1000000
    maxPods: 110
    EOF
    
    cat >/usr/lib/systemd/system/kubelet.service <<EOF
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    Requires=docker.service
    
    [Service]
    EnvironmentFile=-${K8S_DIR}/cfg/kubelet.conf
    ExecStart=${K8S_DIR}/bin/kubelet \$KUBELET_OPTS
    Restart=on-failure
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable kubelet 
    systemctl restart kubelet
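
    Run the script on each Node with that node's hostname (a sketch; it assumes the script, bootstrap.kubeconfig and the ca certificates were copied to the node as described above):

    # on k8sn1
    sh install_kubelet.sh k8sn1
    # on k8sn2
    sh install_kubelet.sh k8sn2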

    Approve the nodes' certificate signing requests (on the master):

    kubectl get csr

    NAME AGE SIGNERNAME REQUESTOR CONDITION
    csr-7h7nd 49s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
    csr-xfk22 18s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending

    [root@hz-yf-xtax-it-199-142-31 cfg]# kubectl certificate approve csr-xfk22
    certificatesigningrequest.certificates.k8s.io/csr-xfk22 approved

    kubectl get node

    NAME STATUS ROLES AGE VERSION
    k8sn1 NotReady <none> 11s v1.20.4

    The node stays NotReady at this point because no CNI network plugin has been installed yet; it becomes Ready after calico is deployed in section 九.

    8.4、Install kube-proxy

    8.4.1、Create the kube-proxy certificate signing request

    cat > /root/k8s/ssl/kube-proxy-csr.json << EOF
    {
      "CN": "system:kube-proxy",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF

    8.4.2、Generate the certificate

    cfssl gencert -ca=/root/k8s/ssl/ca.pem -ca-key=/root/k8s/ssl/ca-key.pem -config=/root/k8s/ssl/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy 

    8.4.3、Distribute the certificate

    scp /root/k8s/ssl/kube-proxy*.pem root@10.199.142.34:/opt/k8s/ssl/

    8.4.4、Configure kube-proxy

    /root/k8s/gen_kube-proxyconfig.sh, contents:

    The newer version:

    K8S_DIR=/opt/k8s
    source /root/k8s/env.sh
    
    KUBE_CONFIG="/root/k8s/kube-proxy.kubeconfig"
    KUBE_APISERVER="https://${MASTER_URL}:6443"
    
    kubectl config set-cluster kubernetes \
      --certificate-authority=${K8S_DIR}/ssl/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=${KUBE_CONFIG}
    kubectl config set-credentials kube-proxy \
      --client-certificate=${K8S_DIR}/ssl/kube-proxy.pem \
      --client-key=${K8S_DIR}/ssl/kube-proxy-key.pem \
      --embed-certs=true \
      --kubeconfig=${KUBE_CONFIG}
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kube-proxy \
      --kubeconfig=${KUBE_CONFIG}
    kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

    scp /root/k8s/kube-proxy.kubeconfig root@10.199.142.34:/opt/k8s/cfg/

    The older version of this script below has been superseded by the commands above:

    #!/usr/bin/bash
    
    source /root/k8s/env.sh
    
    KUBE_APISERVER="https://${MASTER_URL}:6443"
    
    # set cluster parameters
    kubectl config set-cluster kubernetes \
    --certificate-authority=/opt/k8s/ssl/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=kube-proxy.kubeconfig
    
    # set client credentials
    kubectl config set-credentials kube-proxy \
    --client-certificate=/root/k8s/ssl/kube-proxy.pem \
    --client-key=/root/k8s/ssl/kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig
    
    # set the context
    kubectl config set-context default \
    --cluster=kubernetes \
    --user=kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig
    
    # use the context by default
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
    
    NODE_IPS=("192.168.56.108" "192.168.56.109")
    for node_ip in ${NODE_IPS[@]};do
        echo ">>> ${node_ip}"
        scp /root/k8s/kube-proxy.kubeconfig root@${node_ip}:/opt/k8s/cfg/
    done

    8.4.5、Create the kube-proxy install script

    /root/k8s/install_kube-proxy.sh, contents:

    #!/usr/bin/bash
    # example: ./install_kube-proxy.sh k8sn1
    NODE_NAME=$1
    
    K8S_DIR=/opt/k8s
    source /root/k8s/env.sh
    
    cat > ${K8S_DIR}/cfg/kube-proxy.conf << EOF
    KUBE_PROXY_OPTS="--logtostderr=false \\
    --v=2 \\
    --log-dir=${K8S_DIR}/logs \\
    --config=${K8S_DIR}/cfg/kube-proxy-config.yml"
    EOF
    
    cat > ${K8S_DIR}/cfg/kube-proxy-config.yml << EOF
    kind: KubeProxyConfiguration
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    metricsBindAddress: 0.0.0.0:10249
    clientConnection:
      kubeconfig: ${K8S_DIR}/cfg/kube-proxy.kubeconfig
    hostnameOverride: ${NODE_NAME}
    clusterCIDR: ${CLUSTER_CIDR}
    EOF
    
    cat > /usr/lib/systemd/system/kube-proxy.service << EOF
    [Unit]
    Description=Kubernetes Proxy
    After=network.target
    
    [Service]
    EnvironmentFile=${K8S_DIR}/cfg/kube-proxy.conf
    ExecStart=${K8S_DIR}/bin/kube-proxy \$KUBE_PROXY_OPTS
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl start kube-proxy
    systemctl enable kube-proxy
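
    Run the script on each Node and check the service (a sketch; it assumes kube-proxy.kubeconfig and the kube-proxy binary were copied to the node as described above):

    sh install_kube-proxy.sh k8sn1
    systemctl status kube-proxy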

     

    九、Install calico

     curl https://docs.projectcalico.org/manifests/calico.yaml -O

    diff calico.yaml calico.yaml.default 
    3660,3662c3660
    <               value: "Never"
    <             - name: IP_AUTODETECTION_METHOD
    <               value: "interface=en.*"
    ---
    >               value: "Always"
    3687,3688c3685,3686
    <             - name: CALICO_IPV4POOL_CIDR
    <               value: "10.100.0.0/16"
    ---
    >             # - name: CALICO_IPV4POOL_CIDR
    >             #   value: "192.168.0.0/16"
    3768c3766
    <             path: /opt/k8s/bin
    ---
    >             path: /opt/cni/bin
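
    Apply the manifest and watch the calico pods come up (a minimal sketch; the CIDR and interface regex above are this environment's values):

    kubectl apply -f calico.yaml
    kubectl get pods -n kube-system -o wide
    kubectl get node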

    十、Deploy the kubedns add-on

    十一、Deploy the dashboard add-on

    Problem:
    [root@k8sm-1 ~]# /opt/etcd/bin/etcdctl --ca-file=/opt/k8s/ssl/ca.pem --cert-file=/opt/k8s/ssl/flanneld.pem --key-file=/opt/k8s/ssl/flanneld-key.pem --endpoints=${ETCD_ENDPOINTS} get ${FLANNEL_ETCD_PREFIX}/config
    Error: client: response is invalid json. The endpoint is probably not valid etcd cluster endpoint
    Solution:
    https://blog.csdn.net/sonsunny/article/details/105226586

    end
