• Deploying a highly available Kubernetes 1.9.1 cluster with kubeadm, CoreDNS, and kube-router (IPVS)


    Since I have already written two posts on deploying Kubernetes and the overall process is essentially the same, this post focuses only on the deployment of CoreDNS and kube-router.

    kube version: 1.9.1
    docker version: 17.03.2-ce
    OS version: debian stretch

    As before, the cluster has three master nodes and one worker node.

    1. Prepare the images; pull them yourself (a proxy may be needed to reach gcr.io).

    # docker images| grep 1.9.1
    gcr.io/google_containers/kube-apiserver-amd64            v1.9.1
    gcr.io/google_containers/kube-controller-manager-amd64   v1.9.1
    gcr.io/google_containers/kube-scheduler-amd64            v1.9.1
    gcr.io/google_containers/kube-proxy-amd64                v1.9.1
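
    If direct access to gcr.io is not possible, a common workaround is to pull the same images from a reachable mirror and retag them to the gcr.io names kubeadm expects. A minimal sketch; registry.example.com/google_containers is a hypothetical mirror path, replace it with a registry you can actually reach:

    # for img in kube-apiserver-amd64 kube-controller-manager-amd64 kube-scheduler-amd64 kube-proxy-amd64; do
    >   docker pull registry.example.com/google_containers/${img}:v1.9.1
    >   docker tag registry.example.com/google_containers/${img}:v1.9.1 gcr.io/google_containers/${img}:v1.9.1
    > done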
    

    2. Install the new versions of kubeadm, kubectl, and kubelet.

    # aptitude install -y kubeadm kubectl kubelet
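
    If the Kubernetes apt repository is not configured on the machines yet, a minimal sketch of the setup as documented at the time (packages.cloud.google.com / apt.kubernetes.io; substitute a mirror if those hosts are unreachable from your network):

    # apt-get update && apt-get install -y apt-transport-https curl
    # curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
    # echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
    # apt-get update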
    

    3. Deploy the first master node. Prepare the kubeadm configuration file; the official documentation for this configuration is incomplete (arguably unusable as written), so the file below took some searching and testing to arrive at.

    # cat kubeadm-config-191.yml
    apiVersion: kubeadm.k8s.io/v1alpha1
    kind: MasterConfiguration
    api:
      advertiseAddress: "192.168.5.62"
    etcd:
      endpoints:
      - "http://192.168.5.84:2379"
      - "http://192.168.5.85:2379"
      - "http://192.168.2.77:2379"
    kubernetesVersion: "v1.9.1"
    apiServerCertSANs:
    - uy06-04
    - uy06-05
    - uy08-10
    - uy08-11
    - 192.168.6.16
    - 192.168.6.17
    - 127.0.0.1
    - 192.168.5.62
    - 192.168.5.63
    - 192.168.5.107
    - 192.168.5.108
    - 30.0.0.1
    - 10.96.0.1
    - kubernetes
    - kubernetes.default
    - kubernetes.default.svc
    - kubernetes.default.svc.cluster
    - kubernetes.default.svc.cluster.local
    tokenTTL: 0s
    networking:
      podSubnet: 30.0.0.0/10
    apiServerExtraArgs:
      enable-swagger-ui: "true"
      insecure-bind-address: 0.0.0.0
      insecure-port: "8088"
      endpoint-reconciler-type: "lease"
    controllerManagerExtraArgs:
      address: 0.0.0.0
    schedulerExtraArgs:
      address: 0.0.0.0
    featureGates:
      CoreDNS: true
    kubeProxy:
      config:
        featureGates: "SupportIPVSProxyMode=true"
        mode: "ipvs"
    

    Note that what is enabled here is kube-proxy's IPVS mode; what kubeadm deploys is still kube-proxy, not kube-router.
    If you plan to use kube-router as the network plugin, you can ignore the kube-proxy settings, because kube-proxy will be deleted later. kube-router not only replaces kube-proxy for proxying Services, it also acts as the network plugin.

    4. Run the kubeadm initialization.

    # kubeadm init --config=kubeadm-config-191.yml --ignore-preflight-errors=all
    [init] Using Kubernetes version: v1.9.1
    [init] Using Authorization modes: [Node RBAC]
    [preflight] Running pre-flight checks.
        [WARNING CRI]: unable to check if the container runtime at "/var/run/dockershim.sock" is running: exit status 1
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [uy06-04 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local uy06-04 uy06-05 uy08-10 uy08-11 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.5.62 192.168.6.16 192.168.6.17 127.0.0.1 192.168.5.62 192.168.5.63 192.168.5.107 192.168.5.108 30.0.0.1 10.96.0.1]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] Generated sa key and public key.
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
    [kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
    [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
    [init] This might take a minute or longer if the control plane images have to be pulled.
    [apiclient] All control plane components are healthy after 90.501851 seconds
    [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [markmaster] Will mark node uy06-04 as master by adding a label and a taint
    [markmaster] Master uy06-04 tainted and labelled with key/value: node-role.kubernetes.io/master=""
    [bootstraptoken] Using token: b1bd11.9ecfaaad5274f9d1
    [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes master has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join --token b1bd11.9ecfaaad5274f9d1 192.168.5.62:6443 --discovery-token-ca-cert-hash sha256:09438d4384c393880a5ac18e2d3d06b547dae7242061c18c03f0fbb1bad76ade
    

    Verify kube-proxy's mode:

    # kubectl exec -it kube-proxy-hr48q -n kube-system -- sh
    
    # ps -ef
    UID        PID  PPID  C STIME TTY          TIME CMD
    root         1     0  0 Jan11 ?        00:04:12 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
    root     29012     0  0 19:24 ?        00:00:00 sh
    root     29043 29012  0 19:24 ?        00:00:00 ps -ef
    
    # cat /var/lib/kube-proxy/config.conf
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 5
    clusterCIDR: 30.0.0.0/10
    configSyncPeriod: 15m0s
    conntrack:
      max: null
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    featureGates: SupportIPVSProxyMode=true
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: ipvs    <- here
    oomScoreAdj: -999
    portRange: ""
    resourceContainer: /kube-proxy
    udpTimeoutMilliseconds: 250ms
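
    The kube-proxy log should also confirm that the IPVS proxier was actually selected; the exact wording varies by release, but in 1.9 you should see a line along the lines of "Using ipvs Proxier":

    # kubectl logs kube-proxy-hr48q -n kube-system | grep -i proxier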
    

    5. Allow the master nodes to be scheduled (remove the master taint).

    # kubectl taint nodes --all node-role.kubernetes.io/master-
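
    To confirm the taint is really gone (node name from this cluster):

    # kubectl describe node uy06-04 | grep -i taints
    Taints:             <none>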
    

    6. Deploy kube-router.

    a. Download the YAML file.

    # curl -L -O https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter-all-features.yaml
    

    b. Change the busybox image pull policy to imagePullPolicy: IfNotPresent; the automatic pull kept timing out here, which left the pod unable to start (a sketch of this change follows step c).

    c. Apply the YAML file.

    # kubectl apply -f kubeadm-kuberouter-all-features.yaml
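
    For reference, the change from step b lands in the busybox init container of the kube-router DaemonSet; after editing it looks roughly like this (a sketch from memory of the upstream manifest, so compare against your downloaded copy rather than pasting it verbatim):

    initContainers:
    - name: install-cni
      image: busybox
      imagePullPolicy: IfNotPresent    # added: reuse the local busybox image instead of pulling
      ...                              # the rest of the init container (it installs the CNI config) is unchanged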
    

    7. At this point, the core components should all be up and running.

    # kubectl get po --all-namespaces
    NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
    kube-system   coredns-65dcdb4cf-mlr9j           1/1       Running   0          22h
    kube-system   kube-apiserver-uy06-04            1/1       Running   0          22h
    kube-system   kube-controller-manager-uy06-04   1/1       Running   0          22h
    kube-system   kube-proxy-hr48q                  1/1       Running   0          22h
    kube-system   kube-router-9lh8x                 1/1       Running   0          22h
    kube-system   kube-scheduler-uy06-04            1/1       Running   0          22h
    

    8. Delete kube-proxy and clean up its iptables rules.

    # kubectl delete ds kube-proxy -n kube-system
    # docker run --privileged --net=host gcr.io/google_containers/kube-proxy-amd64:v1.9.1 kube-proxy --cleanup-iptables
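
    Note that iptables rules are per node, so the cleanup container needs to be run on every machine that previously ran kube-proxy. A quick check that the DaemonSet and its pods are really gone:

    # kubectl get ds,po -n kube-system | grep kube-proxy || echo "kube-proxy removed"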
    

    9. Deploy the other two master nodes, join the worker node to the cluster by pointing it at the apiserver VIP, and complete the remaining configuration; see the previous two posts for the details.

    10. After the deployment is finished, the cluster looks like this.

    # kubectl get no
    NAME      STATUS    ROLES     AGE       VERSION
    uy02-07   Ready     <none>    1d        v1.9.1
    uy05-13   Ready     master    2d        v1.9.1
    uy08-07   Ready     <none>    1d        v1.9.1
    uy08-08   Ready     <none>    1d        v1.9.1
    
    # kubectl get po --all-namespaces
    NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
    default       frontend-66d686db4b-jkbdk               1/1       Running   0          1d
    default       redis-master-5fd44c4c6-gf4zm            1/1       Running   0          1d
    default       redis-slave-74fc6595b4-kp8sl            1/1       Running   0          1d
    default       redis-slave-74fc6595b4-shtx6            1/1       Running   0          1d
    default       snowflake-5c98868c55-8crlt              1/1       Running   0          3h
    default       snowflake-5c98868c55-9psss              1/1       Running   0          1d
    default       snowflake-5c98868c55-ccsfc              1/1       Running   0          1d
    default       snowflake-5c98868c55-p2tjh              1/1       Running   0          1d
    kube-system   coredns-65dcdb4cf-bv95f                 1/1       Running   0          2d
    kube-system   coredns-65dcdb4cf-cv48z                 1/1       Running   0          1h
    kube-system   coredns-65dcdb4cf-grxkw                 1/1       Running   0          1d
    kube-system   coredns-65dcdb4cf-n5kkm                 1/1       Running   0          1d
    kube-system   heapster-7bddb97655-5hbsp               1/1       Running   0          1d
    kube-system   heapster-7bddb97655-8dqgd               1/1       Running   0          1h
    kube-system   heapster-7bddb97655-fd4mb               1/1       Running   0          1d
    kube-system   heapster-7bddb97655-gznsm               1/1       Running   0          1d
    kube-system   kube-apiserver-uy05-13                  1/1       Running   0          1d
    kube-system   kube-apiserver-uy08-07                  1/1       Running   0          1d
    kube-system   kube-apiserver-uy08-08                  1/1       Running   0          1d
    kube-system   kube-controller-manager-uy05-13         1/1       Running   0          23h
    kube-system   kube-controller-manager-uy08-07         1/1       Running   0          23h
    kube-system   kube-controller-manager-uy08-08         1/1       Running   0          23h
    kube-system   kube-router-57mws                       1/1       Running   0          1d
    kube-system   kube-router-j6rks                       1/1       Running   0          2d
    kube-system   kube-router-mfwqv                       1/1       Running   0          1d
    kube-system   kube-router-txp8p                       1/1       Running   0          1d
    kube-system   kube-scheduler-uy05-13                  1/1       Running   0          23h
    kube-system   kube-scheduler-uy08-07                  1/1       Running   0          23h
    kube-system   kube-scheduler-uy08-08                  1/1       Running   1          23h
    kube-system   kubernetes-dashboard-79cb6d66b9-74cf4   1/1       Running   0          3h
    kubernator    kubernator-659cf655b6-9prx2             1/1       Running   0          1d
    monitoring    alertmanager-main-0                     2/2       Running   0          1h
    monitoring    alertmanager-main-1                     2/2       Running   0          1h
    monitoring    alertmanager-main-2                     2/2       Running   0          1h
    monitoring    grafana-6b67b479d5-zj66c                2/2       Running   0          1h
    monitoring    kube-state-metrics-6f7b5c94f-v42tj      2/2       Running   0          1h
    monitoring    node-exporter-5b8p2                     1/1       Running   0          1h
    monitoring    node-exporter-m85xx                     1/1       Running   0          1h
    monitoring    node-exporter-pg2qz                     1/1       Running   0          1h
    monitoring    node-exporter-x9lb6                     1/1       Running   0          1h
    monitoring    prometheus-k8s-0                        2/2       Running   0          1h
    monitoring    prometheus-k8s-1                        2/2       Running   0          1h
    monitoring    prometheus-operator-8697c7fff9-dpn9r    1/1       Running   0          1h
    
    # kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-2               Healthy   {"health": "true"}
    etcd-1               Healthy   {"health": "true"}
    etcd-0               Healthy   {"health": "true"}
    
    # kubectl cluster-info
    Kubernetes master is running at https://192.168.6.15:6443
    Heapster is running at https://192.168.6.15:6443/api/v1/namespaces/kube-system/services/heapster/proxy
    KubeDNS is running at https://192.168.6.15:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    

    Addendum:

    Install ipvsadm to inspect the IPVS (LVS) rules.

    # aptitude install -y ipvsadm
    
    # ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.5.42:30001 rr
      -> 20.0.2.17:9090               Masq    1      0          497
    TCP  192.168.5.42:30211 rr
      -> 20.0.2.12:80                 Masq    1      0          497
    TCP  192.168.5.42:30900 rr
      -> 20.0.2.20:9090               Masq    1      0          250
      -> 20.0.4.15:9090               Masq    1      0          250
    TCP  192.168.5.42:30902 rr
      -> 20.0.2.19:3000               Masq    1      0          500
    TCP  192.168.5.42:30903 rr
      -> 20.0.0.19:9093               Masq    1      0          166
      -> 20.0.2.21:9093               Masq    1      0          166
      -> 20.0.4.14:9093               Masq    1      0          166
    TCP  192.168.5.42:31001 rr
      -> 20.0.0.8:80                  Masq    1      0          497
    TCP  10.96.0.1:443 rr persistent 10800
      -> 192.168.5.42:6443            Masq    1      2          0
      -> 192.168.5.104:6443           Masq    1      0          0
      -> 192.168.5.105:6443           Masq    1      0          0
    TCP  10.96.0.10:53 rr
      -> 20.0.0.2:53                  Masq    1      0          0
      -> 20.0.1.2:53                  Masq    1      0          0
      -> 20.0.2.2:53                  Masq    1      0          0
      -> 20.0.4.9:53                  Masq    1      0          0
    TCP  10.97.245.128:6379 rr
      -> 20.0.0.7:6379                Masq    1      0          0
    TCP  10.98.159.23:80 rr
      -> 20.0.2.17:9090               Masq    1      0          0
    TCP  10.101.179.96:8080 rr
      -> 20.0.4.12:8080               Masq    1      0          0
    TCP  10.101.209.232:80 rr
      -> 20.0.2.12:80                 Masq    1      0          0
    TCP  10.101.255.18:9090 rr
      -> 20.0.2.20:9090               Masq    1      0          0
      -> 20.0.4.15:9090               Masq    1      0          0
    TCP  10.104.53.117:8080 rr
      -> 20.0.4.13:8080               Masq    1      0          0
    TCP  10.105.5.201:3000 rr
      -> 20.0.2.19:3000               Masq    1      0          0
    TCP  10.105.21.201:80 rr
      -> 20.0.0.4:8082                Masq    1      0          0
      -> 20.0.1.3:8082                Masq    1      0          0
      -> 20.0.2.3:8082                Masq    1      0          0
      -> 20.0.4.10:8082               Masq    1      0          0
    TCP  10.105.113.2:6379 rr
      -> 20.0.1.8:6379                Masq    1      0          0
      -> 20.0.2.8:6379                Masq    1      0          0
    TCP  10.105.159.162:9093 rr
      -> 20.0.0.19:9093               Masq    1      0          0
      -> 20.0.2.21:9093               Masq    1      0          0
      -> 20.0.4.14:9093               Masq    1      0          0
    TCP  10.110.48.172:80 rr
      -> 20.0.0.8:80                  Masq    1      0          0
    UDP  10.96.0.10:53 rr
      -> 20.0.0.2:53                  Masq    1      0          14
      -> 20.0.1.2:53                  Masq    1      0          15
      -> 20.0.2.2:53                  Masq    1      0          15
      -> 20.0.4.9:53                  Masq    1      0          15
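
    As an end-to-end check, you can resolve a Service name directly against the CoreDNS ClusterIP (10.96.0.10 above); IPVS load-balances the query across the CoreDNS pods. This assumes dig (dnsutils) is installed on the host:

    # dig +short kubernetes.default.svc.cluster.local @10.96.0.10
    10.96.0.1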
    