Kubernetes learning: binary deployment of 1.16


    Server planning and system initialization

    I. Server planning

    10.255.20.205    Master01      kube-apiserver, kube-controller-manager, kube-scheduler, ETCD
    10.255.20.6      Master02      kube-apiserver, kube-controller-manager, kube-scheduler, ETCD
    10.255.20.242    Master03      kube-apiserver, kube-controller-manager, kube-scheduler, ETCD

    10.255.20.117    Node01        kubelet, kube-proxy, docker
    10.255.20.176    Node02        kubelet, kube-proxy, docker

    II. System initialization (run on all nodes)

    Disable the firewall:
    # systemctl stop firewalld
    # systemctl disable firewalld
    
    Disable SELinux:
    # setenforce 0 # temporary
    # sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
    
    Disable swap:
    # swapoff -a  # temporary
    # vim /etc/fstab  # permanent (comment out the swap line)
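
    A one-liner that makes the permanent change by commenting out the swap entry in /etc/fstab, equivalent to editing the file by hand:
    # sed -ri 's/.*swap.*/#&/' /etc/fstab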
    
    Sync the system time:
    # ntpdate time.windows.com
    
    Add hosts entries:
    # vim /etc/hosts
    10.255.20.205 master01
    10.255.20.6 master02
    10.255.20.242 master03
    10.255.20.117 node01
    10.255.20.176 node02
    
    Set the hostname:
    hostnamectl set-hostname <node-name>
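
    For example, matching the planning table above (run the corresponding command on each machine):
    # hostnamectl set-hostname master01   # on 10.255.20.205
    # hostnamectl set-hostname node01     # on 10.255.20.117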

    2. Install dependencies and upgrade the kernel to the latest 5.x version

    function Install_depend_environment(){
        rpm -qa | grep nfs-utils &> /dev/null && echo -e "Dependencies already installed, skipping this step" && return
        yum install -y nfs-utils curl yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl telnet
        echo -e "Upgrading the CentOS 7 kernel to 5.x to avoid Docker-ce compatibility problems"
        rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org && 
        rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm && 
        yum --disablerepo=* --enablerepo=elrepo-kernel repolist && 
        yum --disablerepo=* --enablerepo=elrepo-kernel install -y kernel-ml.x86_64 && 
        yum remove -y kernel-tools-libs.x86_64 kernel-tools.x86_64 && 
        yum --disablerepo=* --enablerepo=elrepo-kernel install -y kernel-ml-tools.x86_64 && 
        grub2-set-default 0
        modprobe br_netfilter
        # write the sysctl settings; the heredoc body and the closing EOF must start at column 0
        cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
        sysctl -p /etc/sysctl.d/k8s.conf
        ls /proc/sys/net/bridge
    }
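
    The function only defines the steps; call it and reboot so the new kernel takes effect (step 3 below assumes the machine has been rebooted):
    # Install_depend_environment && reboot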

    3. After rebooting the machine, check that the bridge netfilter module is loaded

    [root@master03 ~]# ls /proc/sys/net/bridge
    bridge-nf-call-arptables  bridge-nf-call-ip6tables  bridge-nf-call-iptables  bridge-nf-filter-pppoe-tagged  bridge-nf-filter-vlan-tagged  bridge-nf-pass-vlan-input-dev

    ETCD cluster installation

    I. Generate the ETCD certificates (on any one of the ETCD nodes)

    # cd TLS/etcd/
    # ls              # these files are here by default
    ca-config.json  ca-csr.json  server-csr.json  generate_etcd_cert.sh
    
    
    # ca-config.json and ca-csr.json are used to generate ca.pem, ca-key.pem and ca.csr
    # server-csr.json is used to generate server.pem, server-key.pem and server.csr
    # generate_etcd_cert.sh is the script that generates both the CA and the server certificate

    1. Install the cfssl tools

    [root@master01 ~]#  cd TLS
    [root@master01 TLS]# ls
    cfssl  cfssl-certinfo  cfssljson  cfssl.sh  etcd  k8s
    [root@master01 TLS]#  ./cfssl.sh
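
    cfssl.sh itself is not shown in the post; since the cfssl, cfssljson and cfssl-certinfo binaries already sit in the TLS directory (see the ls output above), a script like this typically just installs them into the PATH. A minimal sketch, not necessarily the original script:
    # cat cfssl.sh        # sketch: install the bundled cfssl toolchain
    cp -f cfssl cfssljson cfssl-certinfo /usr/local/bin/
    chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo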

    2. Edit the hosts field of the request file so it includes all etcd node IPs

    [root@master01 TLS]# cd etcd/
    [root@master01 etcd]# vim server-csr.json 
    {
        "CN": "etcd",
        "hosts": [
            "10.255.20.205", #master1的IP
            "10.255.20.6",     #master2的IP
            "10.255.20.242"   #master3的IP
            ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing"
            }
        ]
    }

    3. Generate the certificates

    [root@master01 etcd]# ./generate_etcd_cert.sh   # this script generates the CA and the server certificate in one go; the equivalent separate cfssl commands are sketched below
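
    A sketch of the equivalent separate steps; the -profile name is an assumption and must match a profile actually defined in ca-config.json (this style of setup usually names it www):
    # generate the etcd CA
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    # issue the server certificate signed by that CA
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
      -profile=www server-csr.json | cfssljson -bare server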

    4. Check the generated CA and server certificates

    [root@master01 etcd]# ls *pem
    ca-key.pem  ca.pem  server-key.pem  server.pem

    II. Deploy the three ETCD nodes

    1. Deploy ETCD on one node first

    [root@master01 ~]# tar zxvf etcd.tar.gz
    [root@master01 ~]# cd etcd
    [root@master01 etcd]# cp /root/TLS/etcd/{ca,server,server-key}.pem ssl   # certificates
    [root@master01 etcd]# cd ..
    [root@master01 ~]# cp -r etcd /opt
    [root@master01 ~]# cp etcd.service /usr/lib/systemd/system

    2. Distribute the etcd setup on master01 to the other two nodes

    [root@master01 ~]# scp -r etcd root@10.255.20.242:/opt 
    [root@master01 ~]# scp etcd.service root@10.255.20.242:/usr/lib/systemd/system
    
    [root@master01 ~]# scp -r etcd root@10.255.20.6:/opt 
    [root@master01 ~]# scp etcd.service root@10.255.20.6:/usr/lib/systemd/system

    3. Edit the etcd configuration file on each of the three nodes

    Mainly the node name and the IP addresses.

    vi /opt/etcd/cfg/etcd.conf
    
    #[Member]
    ETCD_NAME="master01"  #每个节点都修改为自己的节点名称
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://10.255.20.205:2380"   #改为自己的IP 
    ETCD_LISTEN_CLIENT_URLS="https://10.255.20.205:2379" #改为自己的IP
    
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.255.20.205:2380" #改为自己的IP 
    ETCD_ADVERTISE_CLIENT_URLS="https://10.255.20.205:2379"       #改为自己的IP 
    ETCD_INITIAL_CLUSTER="master01=https://10.255.20.205:2380,master02=https://10.255.20.6:2380,master03=https://10.255.20.242:2380" #三个IP和端口
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"

    4. Start etcd

    # systemctl start etcd
    # systemctl enable etcd
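
    Note: etcd on the first node will wait (and eventually time out) until enough members have joined to form a quorum, so start it on all three nodes at roughly the same time. If root SSH access is available (it is already used for scp above), something like this works from master01:
    for ip in 10.255.20.6 10.255.20.242; do
        ssh root@$ip "systemctl daemon-reload && systemctl start etcd && systemctl enable etcd" &
    done
    systemctl start etcd && systemctl enable etcd
    wait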

    5. Check the cluster health

    # /opt/etcd/bin/etcdctl \
    --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
    --endpoints="https://10.255.20.205:2379,https://10.255.20.6:2379,https://10.255.20.242:2379" cluster-health
    member 37f20611ff3d9209 is healthy: got healthy result from https://10.255.20.205:2379
    member b10f0bac3883a232 is healthy: got healthy result from https://10.255.20.6:2379
    member b46624837acedac9 is healthy: got healthy result from https://10.255.20.242:2379
    cluster is healthy

    Master node deployment

    I. Generate the apiserver certificates (on any one master node, then distribute them to the other two)

    1. Edit server-csr.json

    The LB address and all master node IPs must be listed up front; if another master node is added later, the certificate has to be regenerated and redistributed.

    [root@master01 k8s]# cd TLS/k8s
    [root@master01 k8s]# vi server-csr.json 
    {
        "CN": "kubernetes",
        "hosts": [
          "10.0.0.1",
          "127.0.0.1",
          "kubernetes",
          "kubernetes.default",
          "kubernetes.default.svc",
          "kubernetes.default.svc.cluster",
          "kubernetes.default.svc.cluster.local",
          "10.255.20.205",   #master1 ip
          "10.255.20.6",     #master2 ip
          "10.255.20.242",   #master3 ip
          "10.255.20.165"    #滴滴云内网LB的ip
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }

    2. Generate the certificates

    [root@master01 k8s]# ./generate_k8s_cert.sh
    [root@master01 k8s]# ls *pem
    ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem

    II. Deploy kube-apiserver, kube-controller-manager and kube-scheduler

    All of the master components' binaries and config files are inside k8s-master.

    1. Copy the certificates and the unit files of the three master components

    [root@master01 ~]# tar zxvf k8s-master.tar.gz
    [root@master01 ~]# cd k8s-master/
    [root@master01 k8s-master]# tree kubernetes/
    kubernetes/
    ├── bin
    │   ├── kube-apiserver
    │   ├── kube-controller-manager
    │   ├── kubectl
    │   └── kube-scheduler
    ├── cfg
    │   ├── kube-apiserver.conf
    │   ├── kube-controller-manager.conf
    │   ├── kube-scheduler.conf
    │   └── token.csv
    ├── logs
    └── ssl
    
    [root@master01 k8s-master]# cd kubernetes
    [root@master01 kubernetes]# cp /root/TLS/k8s/*.pem ssl
    [root@master01 kubernetes]# cd ..
    [root@master01 k8s-master]# cp -r kubernetes /opt
    [root@master01 k8s-master]# cp kube-apiserver.service kube-controller-manager.service kube-scheduler.service /usr/lib/systemd/system

    2. Edit the apiserver configuration file

    [root@master01 k8s-master]# cd /opt/kubernetes/cfg/
    [root@master01 cfg]# vim kube-apiserver.conf 
    KUBE_APISERVER_OPTS="--logtostderr=false 
    --v=2 
    --log-dir=/opt/kubernetes/logs 
    --etcd-servers=https://10.255.20.205:2379,https://10.255.20.6:2379,https://10.255.20.242:2379   # etcd cluster endpoints
    --bind-address=10.255.20.205      # this machine's IP
    --secure-port=6443 
    --advertise-address=10.255.20.205  # this machine's IP
    --allow-privileged=true 
    --service-cluster-ip-range=10.0.0.0/24  # Service ClusterIP range; must match kube-controller-manager.conf and kube-proxy-config.yml
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction 
    --authorization-mode=RBAC,Node 
    --enable-bootstrap-token-auth=true 
    --token-auth-file=/opt/kubernetes/cfg/token.csv 
    --service-node-port-range=30000-32767 
    --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem 
    --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem 
    --tls-cert-file=/opt/kubernetes/ssl/server.pem  
    --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem 
    --client-ca-file=/opt/kubernetes/ssl/ca.pem 
    --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem 
    --etcd-cafile=/opt/etcd/ssl/ca.pem 
    --etcd-certfile=/opt/etcd/ssl/server.pem 
    --etcd-keyfile=/opt/etcd/ssl/server-key.pem 
    --audit-log-maxage=30 
    --audit-log-maxbackup=3 
    --audit-log-maxsize=100 
    --audit-log-path=/opt/kubernetes/logs/k8s-audit.log"

    3. Edit the kube-controller-manager configuration file

    [root@master01 cfg]# vim kube-controller-manager.conf
    
    KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false 
    --v=2 
    --log-dir=/opt/kubernetes/logs 
    --leader-elect=true 
    --master=127.0.0.1:8080   # connect to the apiserver via the local insecure address
    --address=127.0.0.1 
    --allocate-node-cidrs=true 
    --cluster-cidr=10.244.0.0/16   # Pod IP range
    --service-cluster-ip-range=10.0.0.0/24  # Service ClusterIP range; must match kube-apiserver.conf and kube-proxy-config.yml
    --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem 
    --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  
    --root-ca-file=/opt/kubernetes/ssl/ca.pem 
    --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem 
    --experimental-cluster-signing-duration=87600h0m0s"

    4. Edit the kube-scheduler configuration file

    [root@master01 cfg]# vim kube-scheduler.conf
    KUBE_SCHEDULER_OPTS="--logtostderr=false 
    --v=2 
    --log-dir=/opt/kubernetes/logs 
    --leader-elect 
    --master=127.0.0.1:8080   # connect to the apiserver
    --address=127.0.0.1"

    5. Start the master components and enable them at boot

    # systemctl start kube-apiserver
    # systemctl start kube-controller-manager
    # systemctl start kube-scheduler
    # systemctl enable kube-apiserver
    # systemctl enable kube-controller-manager
    # systemctl enable kube-scheduler
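
    A quick sanity check once the three components are running (on 1.16, kubectl get cs still prints a readable status table); the scheduler, controller-manager and all three etcd members should report Healthy:
    # kubectl get cs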

    6. Distribute the master components to the other two nodes (the kubernetes directory and the unit files)

    # run on master01, where everything has been prepared
    [root@master01 ~]# scp -r /opt/kubernetes root@10.255.20.6:/opt
    [root@master01 ~]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@10.255.20.6:/usr/lib/systemd/system
    
    [root@master01 ~]# scp -r /opt/kubernetes root@10.255.20.242:/opt
    [root@master01 ~]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@10.255.20.242:/usr/lib/systemd/system

    7. On the other two nodes, change the apiserver configuration to the local IP, then start and enable the master components

    ##### On the remaining two nodes, change the apiserver configuration to the local IP #####
    # run on each of the remaining two nodes (master02 shown here)
    vim /opt/kubernetes/cfg/kube-apiserver.conf
    KUBE_APISERVER_OPTS="--logtostderr=false 
    --v=2 
    --log-dir=/opt/kubernetes/logs 
    --etcd-servers=https://10.255.20.205:2379,https://10.255.20.6:2379,https://10.255.20.242:2379 
    --bind-address=10.255.20.6 
    --secure-port=6443 
    --advertise-address=10.255.20.6 
    ...   # the remaining options stay unchanged
    # systemctl start kube-apiserver
    # systemctl start kube-controller-manager
    # systemctl start kube-scheduler
    # systemctl enable kube-apiserver
    # systemctl enable kube-controller-manager
    # systemctl enable kube-scheduler

    III. Enable TLS bootstrapping

    1. Authorization for kubelet TLS bootstrapping

    [root@master01 ~]# cat /opt/kubernetes/cfg/token.csv 
    c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"

    2. Bind the kubelet-bootstrap user to the role (running this on one master node is enough)

    [root@master01 ~]# kubectl create clusterrolebinding kubelet-bootstrap \
    --clusterrole=system:node-bootstrapper \
    --user=kubelet-bootstrap

    Note: the token can be regenerated and swapped in:

    head -c 16 /dev/urandom | od -An -t x | tr -d ' '
    # but the token configured for the apiserver must match the one in bootstrap.kubeconfig on the node side.
    On the master, kube-apiserver.conf references it via --token-auth-file=/opt/kubernetes/cfg/token.csv
    On the node, the line "token: c47ffb939f5ca36231d9e3121a252940" in bootstrap.kubeconfig is exactly the token stored in the apiserver's token.csv
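
    A sketch of swapping in a new token, assuming the paths used above; the apiserver has to be restarted and the node's bootstrap.kubeconfig updated with the same value:
    TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
    # on every master: rewrite token.csv (format: token,user,uid,"group") and restart the apiserver
    echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv
    systemctl restart kube-apiserver
    # on every node: put the same token value into bootstrap.kubeconfig (copy it over) and restart kubelet
    sed -i "s/token: .*/token: ${TOKEN}/" /opt/kubernetes/cfg/bootstrap.kubeconfig
    systemctl restart kubelet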

    Node component deployment

    I. Install docker (on both worker nodes)

    Binary package download: https://download.docker.com/linux/static/stable/x86_64/

    [root@node01 ]# tar zxvf k8s-node.tar.gz
    [root@node01 ]# tar zxvf docker-18.09.6.tgz
    [root@node01 ]# mv docker/* /usr/bin
    [root@node01 ]# mkdir /etc/docker
    [root@node01 ]# mv daemon.json /etc/docker
    [root@node01 ]# mv docker.service /usr/lib/systemd/system
    [root@node01 ]# systemctl start docker
    [root@node01 ]# systemctl enable docker
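
    daemon.json ships inside the node package and its contents are not shown in the post; a typical minimal file for this kind of binary install looks roughly like the following (the mirror URL is only an example, not necessarily what the original file uses):
    # cat /etc/docker/daemon.json
    {
        "registry-mirrors": ["https://registry.docker-cn.com"],
        "log-driver": "json-file",
        "log-opts": {"max-size": "100m", "max-file": "3"}
    }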

    II. Deploy kubelet and kube-proxy

    1. Copy the working directory and the unit files (on both worker nodes)

    [root@node01]# tree kubernetes
    kubernetes/
    ├── bin
    │   ├── kubelet
    │   └── kube-proxy
    ├── cfg
    │   ├── bootstrap.kubeconfig
    │   ├── kubelet.conf
    │   ├── kubelet-config.yml
    │   ├── kube-proxy.conf
    │   ├── kube-proxy-config.yml
    │   └── kube-proxy.kubeconfig
    ├── logs
    └── ssl
    [root@node01] # mv kubernetes /opt
    [root@node01] # cp kubelet.service kube-proxy.service /usr/lib/systemd/system

    2. Copy the certificates the two worker nodes need from the master

    [root@master01]# cd TLS/k8s
    [root@master01 k8s]# scp ca.pem kube-proxy*.pem root@10.255.20.117:/opt/kubernetes/ssl/
    [root@master01 k8s]# scp ca.pem kube-proxy*.pem root@10.255.20.176:/opt/kubernetes/ssl/

    3. On both worker nodes, change the apiserver address in the config files

    Why is only one IP configured even though there are three master nodes? Because 6443 is the apiserver port and a DiDi Cloud internal load balancer provides HA for the masters: the LB forwards its port 6443 to port 6443 on all three masters, and the LB's IP (10.255.20.165) was included in the kubernetes certificate when it was generated.

    bootstrap.kubeconfig:    server: https://10.255.20.165:6443
    kube-proxy.kubeconfig:    server: https://10.255.20.165:6443
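
    Only the server: line needs to change in each file; a quick way to do it on both worker nodes (assuming the files live under /opt/kubernetes/cfg as shown above):
    cd /opt/kubernetes/cfg
    sed -i 's#server: https://.*:6443#server: https://10.255.20.165:6443#' bootstrap.kubeconfig kube-proxy.kubeconfig
    grep 'server:' bootstrap.kubeconfig kube-proxy.kubeconfig   # verify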

    4. On both worker nodes, set the node's own hostname in the config files

    kubelet.conf:--hostname-override=node01 
    kube-proxy-config.yml:hostnameOverride: node01
    
    kubelet.conf:--hostname-override=node02
    kube-proxy-config.yml:hostnameOverride: node02
    
    # the hostname above is the name the node registers with and is shown as on the master

    5. Start kubelet and kube-proxy

    # systemctl start kubelet
    # systemctl start kube-proxy
    # systemctl enable kubelet    
    # systemctl enable kube-proxy

    Note: when the node components start, they immediately request certificates from the master; those requests still have to be approved on the master.

    6. Approve the certificates for the two nodes on the master

    [root@master01 ]# kubectl get csr
    [root@master01 ]# kubectl certificate approve node-csr-MYUxbmf_nmPQjmH3LkbZRL2uTO-_FCzDQUoUfTy7YjI
    [root@master01 ]# kubectl get node
    NAME     STATUS   ROLES    AGE   VERSION
    node01   NotReady    <none>   18h   v1.16.0
    node02   NotReady    <none>   18h   v1.16.0
    
    ### Why NotReady? Because the CNI network plugin has not been deployed yet.
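
    If several node CSRs are pending, they can be approved in one go (a convenience one-liner; it assumes every pending request really should be approved):
    [root@master01 ~]# kubectl get csr -o name | xargs kubectl certificate approve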

    7. Deploy the CNI network plugin

    Binary package download: https://github.com/containernetworking/plugins/releases

    7.1.1 On every node, install the CNI plugin package and create the CNI directories

    # mkdir /opt/cni/bin /etc/cni/net.d -p
    # tar zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin

    7.1.2 Make sure kubelet has CNI enabled

    # cat /opt/kubernetes/cfg/kubelet.conf 
    --network-plugin=cni

    7.1.3 Deploy flannel from the master

    [root@master ] # kubectl apply -f kube-flannel.yaml
    [root@master ] # kubectl get pods -n kube-system
    NAME                          READY   STATUS    RESTARTS   AGE
    kube-flannel-ds-amd64-5xmhh   1/1     Running   6          171m
    kube-flannel-ds-amd64-ps5fx   1/1     Running   0          150m
    
    Note: in kube-flannel.yaml,
    net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ## The 10.244.0.0/16 network above must match the --cluster-cidr=10.244.0.0/16 setting in kube-controller-manager.conf

    Note: flannel starts one pod on every node.

    8. Authorize the apiserver to access kubelet

    For security, kubelet rejects anonymous access; the apiserver has to be explicitly authorized before things like viewing pod logs will work.

    [root@master]# cat /opt/kubernetes/cfg/kubelet-config.yml 
    ……
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 2m0s
        enabled: true
      x509:
        clientCAFile: /opt/kubernetes/ssl/ca.pem
    ……
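
    The contents of apiserver-to-kubelet-rbac.yaml are not listed in the post. It normally defines a ClusterRole for the kubelet API sub-resources and binds it to the user named in the apiserver's kubelet client certificate; in this setup that CN is kubernetes, because server.pem is reused as --kubelet-client-certificate. A sketch of such a manifest:
    [root@master]# cat apiserver-to-kubelet-rbac.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: system:kube-apiserver-to-kubelet
    rules:
      - apiGroups: [""]
        resources: ["nodes/proxy", "nodes/stats", "nodes/log", "nodes/spec", "nodes/metrics"]
        verbs: ["*"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: system:kube-apiserver
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:kube-apiserver-to-kubelet
    subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: User
        name: kubernetes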
    
    [root@master]# kubectl apply -f apiserver-to-kubelet-rbac.yaml

    Web UI and DNS deployment

    I. Deploy the Web UI

    https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

    [root@master]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
    
    # vi recommended.yaml
    …
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      type: NodePort
      ports:
        - port: 443
          targetPort: 8443
          nodePort: 30001
      selector:
        k8s-app: kubernetes-dashboard
    …
    
    [root@master]# kubectl apply -f recommended.yaml

    1. Create a service account and bind it to the default cluster-admin cluster role

    [root@master]# cat dashboard-adminuser.yaml 
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding 
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kubernetes-dashboard

    2. Get the login token

    [root@master]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

    Access URL: https://NodeIP:30001 (the Dashboard serves HTTPS; NodePort 30001 maps to container port 8443)

    Log in to the Dashboard with the token printed by the command above.

    II. Deploy DNS

    [root@master]# kubectl apply -f coredns.yaml
    [root@master]# kubectl get pods -n kube-system
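
    Once the coredns pod is Running, DNS can be verified from a throwaway pod (busybox 1.28.4 is used because nslookup is broken in some newer busybox images; this assumes the nodes can pull the image):
    [root@master]# kubectl run -it --rm dns-test --restart=Never --image=busybox:1.28.4 -- nslookup kubernetes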