• One-click deployment of a highly available Kubernetes 1.10.3 cluster with Ansible


    Readers should keep their environment consistent with the one described here.

    The installation downloads the required system packages, so make sure every node has Internet access.

    Cluster node information for this installation


    Lab environment: VMware virtual machines

    IP address        Hostname   CPU       Memory
    192.168.77.133    k8s-m1     6 cores   6 GB
    192.168.77.134    k8s-m2     6 cores   6 GB
    192.168.77.135    k8s-m3     6 cores   6 GB
    192.168.77.136    k8s-n1     6 cores   6 GB
    192.168.77.137    k8s-n2     6 cores   6 GB
    192.168.77.138    k8s-n3     6 cores   6 GB

    In addition, the master nodes jointly provide a VIP: 192.168.77.140.

    Cluster topology for this installation


     
    [Figure: cluster topology diagram]

    Ansible roles used in this installation

    For how to use Ansible roles, see the article below.

    Cluster installation method

    The Kubernetes HA cluster is installed with the control-plane components running as static pods.
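
    With static pods, the kubelet on each master runs the control-plane components (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) from manifest files it watches on disk, instead of having them scheduled through the API server. A quick way to see them after installation, assuming the conventional manifest directory (the exact path used by this role is not confirmed here):

    # ls /etc/kubernetes/manifests/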

    Operations on the Ansible control node


    OS: CentOS Linux release 7.4.1708 (Core)
    ansible: 2.5.3

    Install Ansible
    # yum -y install ansible
    # ansible --version
    ansible 2.5.3
      config file = /etc/ansible/ansible.cfg
      configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
      ansible python module location = /usr/lib/python2.7/site-packages/ansible
      executable location = /usr/bin/ansible
      python version = 2.7.5 (default, Aug  4 2017, 00:39:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
    
    Configure Ansible

    Uncommenting host_key_checking (which is set to False in the stock configuration file) makes Ansible skip the interactive SSH host-key confirmation when it first connects to each node:
    # sed -i 's|#host_key_checking|host_key_checking|g' /etc/ansible/ansible.cfg
    
    Download the roles
    # yum -y install git
    # git clone https://github.com/kuailemy123/Ansible-roles.git /etc/ansible/roles
    Cloning into '/etc/ansible/roles'...
    remote: Counting objects: 1767, done.
    remote: Compressing objects: 100% (20/20), done.
    remote: Total 1767 (delta 5), reused 24 (delta 4), pack-reused 1738
    Receiving objects: 100% (1767/1767), 427.96 KiB | 277.00 KiB/s, done.
    Resolving deltas: 100% (639/639), done.
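
    A quick check that the roles referenced by the playbook below are present after cloning (role names taken from the playbook in this post):

    # ls /etc/ansible/roles/ | grep -E 'hostnames|repo-epel|docker|kubernetes'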
    
    Download kubernetes-files.zip

    This archive contains the required gcr.io (Google) Docker images, exported in advance so they can be loaded without direct access to Google's registries, which may be unreachable in some regions.

    Download link: https://pan.baidu.com/s/1BNMJLEVzCE8pvegtT7xjyQ  Password: qm4k

    # yum -y install unzip
    # unzip kubernetes-files.zip -d /etc/ansible/roles/kubernetes/files/
    
    Configure the host inventory
    # cat /etc/ansible/hosts
    [k8s-master]
    192.168.77.133
    192.168.77.134
    192.168.77.135
    [k8s-node]
    192.168.77.136
    192.168.77.137
    192.168.77.138
    [k8s-cluster:children]
    k8s-master
    k8s-node
    [k8s-cluster:vars]
    ansible_ssh_pass=123456
    

    The k8s-master group holds all master hosts and the k8s-node group holds all node hosts; k8s-cluster contains every host from both the k8s-master and k8s-node groups.

    Note: use lowercase hostnames only; uppercase letters can cause hosts not to be found.
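
    Because the inventory authenticates with ansible_ssh_pass, sshpass must be installed on the control node. A connectivity check with Ansible's standard ping module (generic Ansible usage, not part of the original walkthrough) confirms every host is reachable before the long playbook run:

    # yum -y install sshpass
    # ansible k8s-cluster -m ping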

    Configure the playbook
    # cat /etc/ansible/k8s.yml
    ---
    # Initialize the cluster
    - hosts: k8s-cluster
      serial: "100%"
      any_errors_fatal: true
      vars:
        - ipnames:
            '192.168.77.133': 'k8s-m1'
            '192.168.77.134': 'k8s-m2'
            '192.168.77.135': 'k8s-m3'
            '192.168.77.136': 'k8s-n1'
            '192.168.77.137': 'k8s-n2'
            '192.168.77.138': 'k8s-n3'
      roles:
        - hostnames
        - repo-epel
        - docker
    
    # Install the master nodes
    - hosts: k8s-master
      any_errors_fatal: true
      vars:
        - kubernetes_master: true
        - kubernetes_apiserver_vip: 192.168.77.140
      roles:
        - kubernetes
    
    # Install the node hosts
    - hosts: k8s-node
      any_errors_fatal: true
      vars:
        - kubernetes_node: true
        - kubernetes_apiserver_vip: 192.168.77.140
      roles:
        - kubernetes
        
    # Install addons
    - hosts: k8s-master
      any_errors_fatal: true
      vars:
        - kubernetes_addons: true
        - kubernetes_ingress_controller: nginx
        - kubernetes_apiserver_vip: 192.168.77.140
      roles:
        - kubernetes
    

    kubernetes_ingress_controller can also be set to traefik instead of nginx in the addons play above.

    Run the playbook
    # ansible-playbook /etc/ansible/k8s.yml
    ......
    real    26m44.153s
    user    1m53.698s
    sys 0m55.509s
    
     
    [asciinema recording of the playbook run]
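
    If a run fails partway through, the following standard ansible-playbook options (generic Ansible flags, not specific to these roles) make debugging and re-runs cheaper:

    # ansible-playbook /etc/ansible/k8s.yml --syntax-check   # validate the playbook without running it
    # ansible-playbook /etc/ansible/k8s.yml --list-tasks     # show the tasks that would be executed
    # ansible-playbook /etc/ansible/k8s.yml --limit k8s-node # re-run against the node hosts only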
    Verify the cluster version
    # kubectl version
    Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
    
    Verify the cluster status
    kubectl -n kube-system get po -o wide -l k8s-app=kube-proxy
    kubectl -n kube-system get po -l k8s-app=kube-dns
    kubectl -n kube-system get po -l k8s-app=calico-node -o wide
    calicoctl node status
    kubectl -n kube-system get po,svc -l k8s-app=kubernetes-dashboard
    kubectl -n kube-system get po,svc | grep -E 'monitoring|heapster|influxdb'
    kubectl -n ingress-nginx get pods
    kubectl -n kube-system get po -l app=helm
    kubectl -n kube-system logs -f kube-scheduler-k8s-m2
    helm version
    

    The output is not reproduced here.
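
    Beyond the per-addon checks above, a quick overall health check with standard kubectl commands is also useful, plus a look at which master currently holds the VIP:

    # kubectl get nodes -o wide
    # kubectl get cs
    # ip addr | grep 192.168.77.140    # run on each master; the one currently holding the VIP will show it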

    View addon access information

    On the first master server:

    kubectl cluster-info
    Kubernetes master is running at https://192.168.77.140:6443
    Elasticsearch is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
    heapster is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/heapster/proxy
    Kibana is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
    kube-dns is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    monitoring-grafana is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
    monitoring-influxdb is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb:http/proxy
    
    # cat ~/k8s_addons_access
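
    The services listed by cluster-info are reached through the API server proxy and require authentication; for the dashboard, logging in with a service-account token is the usual approach. The commands below are generic kubectl usage, and the actual service-account and secret names created by this role are not confirmed here:

    # kubectl -n kube-system get secret | grep dashboard
    # kubectl -n kube-system describe secret <dashboard-token-secret-name>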
    

    After the cluster deployment completes, it is recommended to reboot all cluster nodes.
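
    The reboot can also be driven from the Ansible control node. A minimal sketch using the shell module (Ansible 2.5 has no dedicated reboot module); the command is launched asynchronously (-B 60 -P 0) so the dropped SSH connection is not reported as a failure:

    # ansible k8s-node -m shell -a 'sleep 2 && reboot' -B 60 -P 0
    # ansible k8s-master -m shell -a 'sleep 2 && reboot' -B 60 -P 0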



    Author: lework
    Link: https://www.jianshu.com/p/265cfb0811b2
    Source: Jianshu
    Copyright of the Jianshu article belongs to the author; for any form of reproduction, please contact the author for authorization and credit the source.