• Kubernetes 1.7.6 HA (high-availability) deployment


    Preface:

    1. This article deploys everything from binaries.

    2. Versions: k8s 1.7.6, etcd 3.2.9.

    3. High availability: etcd runs as an HA cluster; kube-apiserver is stateless and is load-balanced with haproxy; kube-controller-manager and kube-scheduler use their built-in leader election, so they need no extra HA work.

    Environment:

    Network layout for this environment: the host/node network is 192.168.1.x/24, the service cluster CIDR is 172.16.x.x/16, and the pod CIDR is 172.17.x.x/16; all three are referenced below.

    Hostname  IP address     Services                                                                                  Notes
    master1   192.168.1.18   etcd flanneld kube-apiserver kube-controller-manager kube-scheduler haproxy keepalived    VIP 192.168.1.24 is the floating IP for the apiserver
    master2   192.168.1.19   etcd flanneld kube-apiserver kube-controller-manager kube-scheduler haproxy keepalived
    master3   192.168.1.20   etcd flanneld kube-apiserver kube-controller-manager kube-scheduler
    node1     192.168.1.21   flanneld docker kube-proxy kubelet harbor
    node2     192.168.1.22   flanneld docker kube-proxy kubelet harbor
    node3     192.168.1.23   flanneld docker kube-proxy kubelet harbor

     

    Steps:

    1. Generate certificates and kubeconfig files (run on any one master)

    The Kubernetes components encrypt their communication with TLS certificates. This document uses CloudFlare's PKI toolkit cfssl to generate the Certificate Authority (CA) and the other certificates.

    The generated CA certificate and key files are:

    • ca-key.pem
    • ca.pem
    • kubernetes-key.pem
    • kubernetes.pem
    • kube-proxy.pem
    • kube-proxy-key.pem
    • admin.pem
    • admin-key.pem

    The components use them as follows:

    • etcd: uses ca.pem, kubernetes-key.pem, kubernetes.pem;
    • kube-apiserver: uses ca.pem, kubernetes-key.pem, kubernetes.pem;
    • kubelet: uses ca.pem;
    • kube-proxy: uses ca.pem, kube-proxy-key.pem, kube-proxy.pem;
    • kubectl: uses ca.pem, admin-key.pem, admin.pem;

    Generating the certificates requires cfssl, so install it first:

    [root@k8s-master01 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    [root@k8s-master01 ~]# chmod +x cfssl_linux-amd64
    [root@k8s-master01 ~]# mv cfssl_linux-amd64 /usr/bin/cfssl
    [root@k8s-master01 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    [root@k8s-master01 ~]# chmod +x cfssljson_linux-amd64
    [root@k8s-master01 ~]# mv cfssljson_linux-amd64 /usr/bin/cfssljson
    [root@k8s-master01 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
    [root@k8s-master01 ~]# chmod +x cfssl-certinfo_linux-amd64
    [root@k8s-master01 ~]# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
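    The three install sequences follow one pattern (download, chmod, move into /usr/bin), so they can be collapsed into a loop. Shown here as a dry run that only echoes the commands, so it can be reviewed first; delete the `echo ` prefixes to execute for real:

```shell
# Dry run: print the install commands for all three cfssl tools.
# Remove the leading "echo " from each line inside the loop to actually run them.
for tool in cfssl cfssljson cfssl-certinfo; do
  echo wget "https://pkg.cfssl.org/R1.2/${tool}_linux-amd64"
  echo chmod +x "${tool}_linux-amd64"
  echo mv "${tool}_linux-amd64" "/usr/bin/${tool}"
done
```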

    Create the CA (Certificate Authority)

    Create the CA config file

    [root@k8s-master01 ~]# mkdir /opt/ssl
    [root@k8s-master01 ~]# cd /opt/ssl
    [root@k8s-master01 ~]# cfssl print-defaults config > config.json
    [root@k8s-master01 ~]# cfssl print-defaults csr > csr.json
    [root@k8s-master01 ~]# cat ca-config.json
    {
      "signing": {
        "default": {
          "expiry": "8760h"
        },
        "profiles": {
          "kubernetes": {
            "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ],
            "expiry": "8760h"
          }
        }
      }
    }

    Create the CA certificate signing request

    [root@k8s-master01 ~]# cat ca-csr.json
    {
      "CN": "kubernetes",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }

    Generate the CA certificate and private key

    [root@k8s-master01 ~]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
    [root@k8s-master01 ~]# ls ca*
    ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

    Create the kubernetes certificate

    Create the kubernetes certificate signing request

    [root@k8s-master01 ~]# cat kubernetes-csr.json
    {
        "CN": "kubernetes",
        "hosts": [
          "127.0.0.1",
          "192.168.1.18",
          "192.168.1.19",
          "192.168.1.20",
          "192.168.1.21",
          "192.168.1.22",
          "192.168.1.23",
          "172.16.0.1",
          "kubernetes",
          "kubernetes.default",
          "kubernetes.default.svc",
          "kubernetes.default.svc.cluster",
          "kubernetes.default.svc.cluster.local"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "ST": "BeiJing",
                "L": "BeiJing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    • If the hosts field is non-empty, it must list the IPs or domain names authorized to use the certificate. Because this certificate is later used by both the etcd cluster and the kubernetes master cluster, the list above includes the etcd cluster hosts, the kubernetes master hosts, and the kubernetes service IP (normally the first IP of the --service-cluster-ip-range passed to kube-apiserver, i.e. 172.16.0.1 in this environment).
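    After cfssl produces the certificate, it is worth confirming that the hosts list actually landed in its Subject Alternative Name extension. A self-contained sketch: it builds a throwaway self-signed certificate at /tmp/demo.pem carrying a subset of the SANs (a stand-in for kubernetes.pem, so it assumes nothing about your files; requires openssl 1.1.1+ for -addext), then inspects it. Run the same `openssl x509` inspection against the real kubernetes.pem:

```shell
# Create a throwaway cert with a few of the SANs from the hosts list
# (a stand-in for the real kubernetes.pem, for demonstration only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo.pem \
  -subj "/CN=kubernetes" \
  -addext "subjectAltName=IP:127.0.0.1,IP:172.16.0.1,DNS:kubernetes.default.svc.cluster.local"

# Inspect the SANs; every entry from the hosts field should appear here.
openssl x509 -in /tmp/demo.pem -noout -text | grep -A1 "Subject Alternative Name"
```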

    Generate the kubernetes certificate and private key

    $ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
    $ ls kubernetes*
    kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem

    Create the admin certificate

    Create the admin certificate signing request

    $ cat admin-csr.json
    {
      "CN": "admin",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "system:masters",
          "OU": "System"
        }
      ]
    }

    Generate the admin certificate and private key

    $ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
    $ ls admin*
    admin.csr  admin-csr.json  admin-key.pem  admin.pem

    Create the kube-proxy certificate

    Create the kube-proxy certificate signing request

    $ cat kube-proxy-csr.json
    {
      "CN": "system:kube-proxy",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }

    Generate the kube-proxy client certificate and private key

    $ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
    $ ls kube-proxy*
    kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

    Distribute the certificates

    Copy the generated certificate and key files (the .pem files) to /etc/kubernetes/ssl on every machine:

    [root@k8s-master01 ~]# cd /opt/ssl/
    [root@k8s-master01 ssl]# mkdir -p /etc/kubernetes/ssl/
    [root@k8s-master01 ssl]# cp * /etc/kubernetes/ssl/
    [root@k8s-master01 ssl]# for i in `seq 19 23`; do  scp -r /etc/kubernetes/ 192.168.1.$i:/etc/;done

    Create the kubeconfig files

    Configure kubectl's kubeconfig file

    The file is generated at /root/.kube/config.

    # configure the kubernetes cluster
    [root@k8s-master01 ~]# kubectl config set-cluster kubernetes \
    >   --certificate-authority=/etc/kubernetes/ssl/ca.pem \
    >   --embed-certs=true \
    >   --server=https://192.168.1.24:6444
    Cluster "kubernetes" set.
    
    # configure client authentication
    [root@k8s-master01 ~]# kubectl config set-credentials admin \
    >   --client-certificate=/etc/kubernetes/ssl/admin.pem \
    >   --embed-certs=true \
    >   --client-key=/etc/kubernetes/ssl/admin-key.pem
    User "admin" set.
    [root@k8s-master01 ~]# kubectl config set-context kubernetes \
    >   --cluster=kubernetes \
    >   --user=admin
    Context "kubernetes" created.
    [root@k8s-master01 ~]# kubectl config use-context kubernetes
    Switched to context "kubernetes".
    
    # distribute the file
    [root@k8s-master01 ~]# for i in `seq 19 23`;do scp -r /root/.kube 192.168.1.$i:/root/;done
    config                                                                                              100% 6260     6.1KB/s   00:00    
    config                                                                                              100% 6260     6.1KB/s   00:00    
    config                                                                                              100% 6260     6.1KB/s   00:00    
    config                                                                                              100% 6260     6.1KB/s   00:00    
    config                                                                                              100% 6260     6.1KB/s   00:00    
    [root@k8s-master01 ~]# 
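    The four commands above assemble /root/.kube/config step by step. The resulting file has roughly the following shape (a structural sketch with the certificate data elided, not the literal generated file):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes                  # from set-cluster
  cluster:
    server: https://192.168.1.24:6444
    certificate-authority-data: <base64 of ca.pem, inlined by --embed-certs=true>
users:
- name: admin                       # from set-credentials
  user:
    client-certificate-data: <base64 of admin.pem>
    client-key-data: <base64 of admin-key.pem>
contexts:
- name: kubernetes                  # from set-context
  context:
    cluster: kubernetes
    user: admin
current-context: kubernetes         # from use-context
```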

    kubelet and kube-proxy, the processes that run on the Node machines, must authenticate and be authorized when they talk to kube-apiserver on the Master machines.

    Starting with Kubernetes 1.4, kube-apiserver supports TLS Bootstrapping, under which it issues TLS certificates to clients so that you do not have to generate a certificate for each one; at this version the feature only issues certificates for kubelet.

    Create the TLS Bootstrapping token

    Token auth file

    The token can be any string containing 128 bits of entropy; generate it with a secure random number generator.

    [root@k8s-master01 ssl]# cd /etc/kubernetes/
    [root@k8s-master01 kubernetes]# export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
    [root@k8s-master01 kubernetes]# cat > token.csv <<EOF
    > ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    > EOF
    [root@k8s-master01 kubernetes]# ls
    ssl  token.csv
    [root@k8s-master01 kubernetes]# cat token.csv 
    bd962dfaa4b87d896c4e944f113428d3,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    [root@k8s-master01 kubernetes]# 
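    The token above is 128 bits of entropy rendered as hex: 16 random bytes become 32 hex characters. A quick local sanity check of the generator pipeline:

```shell
# Same pipeline as above: 16 random bytes -> hex words -> strip spaces.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "token:  ${BOOTSTRAP_TOKEN}"
echo "length: ${#BOOTSTRAP_TOKEN}"   # 16 bytes * 2 hex digits = 32 characters
```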

    Distribute token.csv to /etc/kubernetes/ on every machine (Masters and Nodes):

    [root@k8s-master01 kubernetes]# for i in `seq 19 23`; do scp token.csv 192.168.1.$i:/etc/kubernetes/;done
    token.csv                                                                                           100%   84     0.1KB/s   00:00    
    token.csv                                                                                           100%   84     0.1KB/s   00:00    
    token.csv                                                                                           100%   84     0.1KB/s   00:00    
    token.csv                                                                                           100%   84     0.1KB/s   00:00    
    token.csv                                                                                           100%   84     0.1KB/s   00:00    
    [root@k8s-master01 kubernetes]#

    Create the kubelet bootstrapping kubeconfig file

    When kubelet starts it sends a TLS bootstrapping request to kube-apiserver. Before that can succeed, the kubelet-bootstrap user from the bootstrap token file must be granted the system:node-bootstrapper role; only then may kubelet create certificate signing requests (certificatesigningrequests).

    Create that binding first. The user is the one configured in token.csv on the master, and this only needs to be done once for the cluster.
    Run on a Master node:

    [root@k8s-master01 bin]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

    Create the kubelet kubeconfig file

    Copy the kubectl binary into place:
    [root@k8s-master01 bin]# cp kubectl /usr/bin/

    [root@k8s-master01 bin]# cd /etc/kubernetes/

    # configure the cluster
    [root@k8s-master01 kubernetes]# kubectl config set-cluster kubernetes \
    >   --certificate-authority=/etc/kubernetes/ssl/ca.pem \
    >   --embed-certs=true \
    >   --server=https://192.168.1.24:6444 \
    >   --kubeconfig=bootstrap.kubeconfig
    Cluster "kubernetes" set.

    # configure client authentication
    [root@k8s-master01 kubernetes]# kubectl config set-credentials kubelet-bootstrap \
    >   --token=bd962dfaa4b87d896c4e944f113428d3 \
    >   --kubeconfig=bootstrap.kubeconfig
    User "kubelet-bootstrap" set.

    # configure the context
    [root@k8s-master01 kubernetes]# kubectl config set-context default \
    >   --cluster=kubernetes \
    >   --user=kubelet-bootstrap \
    >   --kubeconfig=bootstrap.kubeconfig
    Context "default" created.

    # set the default context
    [root@k8s-master01 kubernetes]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
    Switched to context "default".
    [root@k8s-master01 kubernetes]# ls
    bootstrap.kubeconfig  ssl  token.csv

    # distribute the file
    [root@k8s-master01 kubernetes]# for i in `seq 19 23`; do scp bootstrap.kubeconfig 192.168.1.$i:/etc/kubernetes/;done
    bootstrap.kubeconfig                                                                                100% 2166     2.1KB/s   00:00
    bootstrap.kubeconfig                                                                                100% 2166     2.1KB/s   00:00
    bootstrap.kubeconfig                                                                                100% 2166     2.1KB/s   00:00
    bootstrap.kubeconfig                                                                                100% 2166     2.1KB/s   00:00
    bootstrap.kubeconfig                                                                                100% 2166     2.1KB/s   00:00
    [root@k8s-master01 kubernetes]#

    Create the kube-proxy kubeconfig file

    [root@k8s-master01 ~]# cd /etc/kubernetes/
    
    # configure the cluster (this kubeconfig is used on the Node machines)
    [root@k8s-master01 kubernetes]# kubectl config set-cluster kubernetes \
    >   --certificate-authority=/etc/kubernetes/ssl/ca.pem \
    >   --embed-certs=true \
    >   --server=https://192.168.1.24:6444 \
    >   --kubeconfig=kube-proxy.kubeconfig
    Cluster "kubernetes" set.
    
    # configure client authentication
    [root@k8s-master01 kubernetes]# kubectl config set-credentials kube-proxy \
    >   --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
    >   --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
    >   --embed-certs=true \
    >   --kubeconfig=kube-proxy.kubeconfig
    User "kube-proxy" set.
    
    # configure the context
    [root@k8s-master01 kubernetes]# kubectl config set-context default \
    >   --cluster=kubernetes \
    >   --user=kube-proxy \
    >   --kubeconfig=kube-proxy.kubeconfig
    Context "default" created.
    
    # set the default context
    [root@k8s-master01 kubernetes]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
    Switched to context "default".
    [root@k8s-master01 kubernetes]# ls
    bootstrap.kubeconfig  kube-proxy.kubeconfig  ssl  token.csv
    [root@k8s-master01 kubernetes]# 

    # distribute the file to every Node machine
    [root@k8s-master01 kubernetes]# for i in `seq 19 23`; do scp kube-proxy.kubeconfig 192.168.1.$i:/etc/kubernetes/;done
    kube-proxy.kubeconfig                                                                               100% 6272     6.1KB/s   00:00
    kube-proxy.kubeconfig                                                                               100% 6272     6.1KB/s   00:00
    kube-proxy.kubeconfig                                                                               100% 6272     6.1KB/s   00:00
    kube-proxy.kubeconfig                                                                               100% 6272     6.1KB/s   00:00
    kube-proxy.kubeconfig                                                                               100% 6272     6.1KB/s   00:00

    2. etcd HA deployment

    3. Master node configuration

    Install flannel

    [root@k8s-master01 src]# tar zxvf flannel-v0.9.0-linux-amd64.tar.gz 
    flanneld
    mk-docker-opts.sh
    README.md
    [root@k8s-master01 src]# mv flanneld /usr/bin/
    [root@k8s-master01 src]# mv mk-docker-opts.sh /usr/bin/
    [root@k8s-master01 src]# for i in `seq 19 23`;do scp /usr/bin/flanneld /usr/bin/mk-docker-opts.sh 192.168.1.$i:/usr/bin/ ;done
    flanneld                                                                                            100%   33MB  32.9MB/s   00:00    
    mk-docker-opts.sh                                                                                   100% 2139     2.1KB/s   00:00    
    flanneld                                                                                            100%   33MB  32.9MB/s   00:00    
    mk-docker-opts.sh                                                                                   100% 2139     2.1KB/s   00:00    
    flanneld                                                                                            100%   33MB  32.9MB/s   00:01    
    mk-docker-opts.sh                                                                                   100% 2139     2.1KB/s   00:00    
    flanneld                                                                                            100%   33MB  32.9MB/s   00:00    
    mk-docker-opts.sh                                                                                   100% 2139     2.1KB/s   00:00    
    flanneld                                                                                            100%   33MB  32.9MB/s   00:01    
    mk-docker-opts.sh                                                                                   100% 2139     2.1KB/s   00:00    
    [root@k8s-master01 src]# 

    Distribute the binaries to all master nodes:

    [root@k8s-master01 bin]# for i in `seq 18 20`;do scp kube-apiserver kube-controller-manager kube-scheduler 192.168.1.$i:/usr/bin/;done
    kube-apiserver                                                                                      100%  176MB  88.2MB/s   00:02    
    kube-controller-manager                                                                             100%  131MB  65.3MB/s   00:02    
    kube-scheduler                                                                                      100%   73MB  72.6MB/s   00:01    
    kube-apiserver                                                                                      100%  176MB  58.8MB/s   00:03    
    kube-controller-manager                                                                             100%  131MB  65.3MB/s   00:02    
    kube-scheduler                                                                                      100%   73MB  72.6MB/s   00:01    
    kube-apiserver                                                                                      100%  176MB  58.8MB/s   00:03    
    kube-controller-manager                                                                             100%  131MB  65.3MB/s   00:02    
    kube-scheduler                                                                                      100%   73MB  72.6MB/s   00:01

    Add the CA certificate to the system trust store

    Enable the dynamic CA configuration:

    update-ca-trust force-enable

    Copy the CA root certificate into the anchor directory:

    cp /etc/kubernetes/ssl/ca.pem /etc/pki/ca-trust/source/anchors/

    Apply the change:

    update-ca-trust extract

    Configure the flannel IP range

    Run on an etcd node. This CIDR is the pod network mentioned above.

    [root@k8s-master01 src]# etcdctl --endpoint https://192.168.1.18:2379 set /flannel/network/config '{"Network":"172.17.0.0/16"}'
    {"Network":"172.17.0.0/16"}
    [root@k8s-master01 src]#

    Configure flannel

    Set up flanneld.service:

    [root@k8s-master01 system]# cat /usr/lib/systemd/system/flanneld.service 
    [Unit]
    Description=Flanneld overlay address etcd agent
    After=network.target
    After=network-online.target
    Wants=network-online.target
    After=etcd.service
    Before=docker.service
        
    [Service]
    Type=notify
    EnvironmentFile=/etc/sysconfig/flanneld
    EnvironmentFile=-/etc/sysconfig/docker-network
    ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
    ExecStartPost=/usr/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
    Restart=on-failure
    [Install]
    WantedBy=multi-user.target
    RequiredBy=docker.service
    [root@k8s-master01 system]#
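    The ExecStartPost line runs mk-docker-opts.sh, which turns the subnet flannel leased for this host into docker daemon options and writes them under the -k key into /run/flannel/docker. Assuming this host leased 172.17.2.0/24 (the subnet that shows up on flannel0 in the verification step further down), the file would look roughly like this; the exact option set is illustrative and depends on the flags flannel was started with:

```shell
# Illustrative content of /run/flannel/docker after mk-docker-opts.sh runs
# (values depend on the subnet this host leased; 172.17.2.0/24 assumed here).
DOCKER_NETWORK_OPTIONS=" --bip=172.17.2.1/24 --ip-masq=true --mtu=1472"
```

docker's own unit file can then load this via EnvironmentFile=/run/flannel/docker and pass $DOCKER_NETWORK_OPTIONS to the daemon, so containers land on the flannel subnet.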

    Configuration files:

    [root@k8s-master01 system]# cat  /etc/sysconfig/flanneld
    FLANNEL_ETCD_ENDPOINTS="https://192.168.1.18:2379,https://192.168.1.19:2379,https://192.168.1.20:2379"
    FLANNEL_ETCD_PREFIX="/flannel/network"
    FLANNEL_OPTIONS="--iface=eth0"

    # iface is the name of the physical NIC

    [root@k8s-master01 system]# cat /etc/sysconfig/docker-network
    DOCKER_NETWORK_OPTIONS=

    # may be empty

    [root@k8s-master01 system]# cat /usr/bin/flanneld-start
    #!/bin/sh

    exec /usr/bin/flanneld \
        -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS:-${FLANNEL_ETCD}} \
        -etcd-prefix=${FLANNEL_ETCD_PREFIX:-${FLANNEL_ETCD_KEY}} \
        "$@"

    [root@k8s-master01 system]# chmod +x /usr/bin/flanneld-start

    Make sure docker is stopped:

    systemctl stop docker

    Start the flanneld service:

    systemctl daemon-reload 
    systemctl enable flanneld
    systemctl start flanneld

    Verify:

    [root@k8s-master01 system]# ifconfig flannel0
    flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
            inet 172.17.2.0  netmask 255.255.0.0  destination 172.17.2.0
            unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
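    One detail in the output above deserves a note: flannel0 reports mtu 1472, not 1500. flannel's default UDP backend encapsulates every packet, and the outer IPv4 and UDP headers are carved out of the physical NIC's 1500-byte MTU:

```shell
# Overhead of flannel's UDP encapsulation on a standard 1500-byte Ethernet MTU.
NIC_MTU=1500
IP_HEADER=20     # outer IPv4 header
UDP_HEADER=8     # outer UDP header
FLANNEL0_MTU=$((NIC_MTU - IP_HEADER - UDP_HEADER))
echo "flannel0 mtu: ${FLANNEL0_MTU}"   # matches the 1472 shown by ifconfig
```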

    Configure kube-apiserver

    Create the log directory:

    [root@k8s-master01 ~]# mkdir /var/log/kubernetes

    Configure the service file:

    [root@k8s-master01 system]# cat /usr/lib/systemd/system/kube-apiserver.service 
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    
    [Service]
    User=root
    ExecStart=/usr/bin/kube-apiserver \
      --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
      --advertise-address=192.168.1.18 \
      --allow-privileged=true \
      --apiserver-count=3 \
      --audit-log-maxage=30 \
      --audit-log-maxbackup=3 \
      --audit-log-maxsize=100 \
      --audit-log-path=/var/lib/audit.log \
      --authorization-mode=RBAC \
      --bind-address=192.168.1.18 \
      --client-ca-file=/etc/kubernetes/ssl/ca.pem \
      --enable-swagger-ui=true \
      --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
      --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
      --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
      --etcd-servers=https://192.168.1.18:2379,https://192.168.1.19:2379,https://192.168.1.20:2379 \
      --event-ttl=1h \
      --kubelet-https=true \
      --insecure-bind-address=192.168.1.18 \
      --runtime-config=rbac.authorization.k8s.io/v1alpha1 \
      --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
      --service-cluster-ip-range=172.16.0.0/16 \
      --service-node-port-range=30000-32000 \
      --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
      --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
      --experimental-bootstrap-token-auth \
      --token-auth-file=/etc/kubernetes/token.csv \
      --logtostderr=false \
      --log-dir=/var/log/kubernetes \
      --v=2
    Restart=on-failure
    RestartSec=5
    Type=notify
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    [root@k8s-master01 system]# 
    # --service-cluster-ip-range=172.16.0.0/16 is the service CIDR mentioned above. Also note --service-node-port-range=30000-32000:
    this is the port range used when mapping services to external ports; randomly assigned NodePorts are taken from it, and an explicitly requested NodePort must fall inside it as well.
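    A planned NodePort can be validated against this range before it goes into a Service manifest. A small sketch (the in_nodeport_range helper is illustrative, not a kubernetes tool; the boundaries come from the flag above):

```shell
# Check that a NodePort falls inside --service-node-port-range=30000-32000.
in_nodeport_range() {
  [ "$1" -ge 30000 ] && [ "$1" -le 32000 ]
}

in_nodeport_range 31080 && echo "31080: ok"
in_nodeport_range 8080  || echo "8080: outside the range, the apiserver would reject it"
```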

    # start the service
    systemctl daemon-reload
    systemctl enable kube-apiserver
    systemctl start kube-apiserver
    systemctl status kube-apiserver

    Configure kube-controller-manager

    Configure the service file:

    [root@k8s-master01 kubernetes]# cat  /usr/lib/systemd/system/kube-controller-manager.service 
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    
    [Service]
    ExecStart=/usr/bin/kube-controller-manager \
      --address=0.0.0.0 \
      --master=http://192.168.1.24:8081 \
      --allocate-node-cidrs=true \
      --service-cluster-ip-range=172.16.0.0/16 \
      --cluster-cidr=172.17.0.0/16 \
      --cluster-name=kubernetes \
      --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
      --root-ca-file=/etc/kubernetes/ssl/ca.pem \
      --leader-elect=true \
      --logtostderr=false \
      --log-dir=/var/log/kubernetes \
      --v=2
    Restart=on-failure
    RestartSec=5
    [Install]
    WantedBy=multi-user.target
    [root@k8s-master01 kubernetes]#
    
    
    # start the service
    systemctl daemon-reload
    systemctl enable kube-controller-manager
    systemctl start kube-controller-manager
    systemctl status kube-controller-manager

    Configure kube-scheduler
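    The scheduler section breaks off here. By analogy with the kube-controller-manager unit above, a kube-scheduler service file for this layout would look roughly like the following. This is a sketch reconstructed from the surrounding configuration (same VIP/insecure-port master address, leader election enabled, same log directory), not the author's original file:

```ini
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/bin/kube-scheduler \
  --address=0.0.0.0 \
  --master=http://192.168.1.24:8081 \
  --leader-elect=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Start it the same way as the other components: systemctl daemon-reload, systemctl enable kube-scheduler, systemctl start kube-scheduler.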

    4. Node configuration

    5. Install the private registry harbor and configure it for HA

    6. Install the DNS add-on

    7. Install the dashboard

    8. Install the monitoring add-ons

  • Original article: https://www.cnblogs.com/zhaojianbo/p/7797648.html