• kubernetes part 2 --- Deploying a k8s cluster from binaries


    Deploying a k8s cluster from binaries

    Cluster architecture

    (cluster architecture diagram)

    Service                   Port(s)
    etcd                      127.0.0.1:2379, 2380
    kubelet                   10250, 10255
    kube-proxy                10256
    kube-apiserver            6443, 127.0.0.1:8080
    kube-scheduler            10251, 10259
    kube-controller-manager   10252, 10257

    Environment preparation

    Host        IP         Memory   Software
    k8s-master  10.0.0.11  1G       etcd, api-server, controller-manager, scheduler
    k8s-node1   10.0.0.12  2G       etcd, kubelet, kube-proxy, docker, flannel
    k8s-node2   10.0.0.13  2G       etcd, kubelet, kube-proxy, docker, flannel
    k8s-node3   10.0.0.14  2G       kubelet, kube-proxy, docker, flannel
    • Disable selinux, firewalld, NetworkManager and postfix (optional)

    • Set the IP address and hostname

    hostnamectl set-hostname <hostname>
    sed -i 's/200/IP/g' /etc/sysconfig/network-scripts/ifcfg-eth0
    
    • Add hosts entries
    cat > /etc/hosts <<EOF
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.0.0.11 k8s-master
    10.0.0.12 k8s-node1
    10.0.0.13 k8s-node2
    10.0.0.14 k8s-node3
    EOF
    
    • Create the k8s configuration directory
    mkdir /etc/kubernetes
    
    • On k8s-node3, set up SSH key-based passwordless login to all nodes
    ssh-keygen
    ssh-copy-id k8s-master
    ssh-copy-id k8s-node1
    ssh-copy-id k8s-node2
    ssh-copy-id k8s-node3
    

    Note: if SSH does not listen on the default port 22

    cat > ~/.ssh/config <<EOF
    Port 12345
    EOF
    

    Issuing the HTTPS certificates

    Certificates fall into three categories depending on what they authenticate:

    • Server cert: used by the server side so that clients can verify its identity, e.g. the docker daemon, kube-apiserver
    • Client cert: used by the server to authenticate clients, e.g. etcdctl, etcd proxy, fleetctl, the docker client
    • Peer cert (both a server cert and a client cert): a two-way certificate used between etcd cluster members

    The certificates a kubernetes cluster needs are as follows:

    • etcd nodes need a server cert to identify their own service and a client cert to talk to the other etcd members, so they use a peer cert
    • The master node needs a server cert identifying the apiserver service and a client cert for connecting to the etcd cluster; two separate certificates are specified here.
    • kubectl, calico and kube-proxy only need a client cert, so the hosts field of the certificate request can be empty.
    • The kubelet certificate is special: it is not generated by hand. The node requests it from the apiserver via TLS Bootstrap, and the controller-manager on the master signs it automatically; it contains one client cert and one server cert.

    Certificates used in this setup (see the reference documentation):

    • One set of peer certs (etcd-peer): etcd<-->etcd<-->etcd
    • Client cert (client): api-server-->etcd and flanneld-->etcd
    • Server cert (apiserver): -->api-server
    • Server cert (kubelet): api-server-->kubelet
    • Client cert (kube-proxy-client): kube-proxy-->api-server

    Where certificates are not used:

    • If certificates were required, every etcd access would have to specify them; for convenience etcd also listens on 127.0.0.1, and local access skips certificates.

    • api-server-->controller-manager

    • api-server-->scheduler


    On k8s-node3, use the CFSSL tool to create the CA certificate, server certificates and client certificates.

    CFSSL is an open-source PKI/TLS tool from CloudFlare. It includes a command-line tool and an HTTP API service for signing, verifying and bundling TLS certificates, and is written in Go.

    GitHub: https://github.com/cloudflare/cfssl
    Official site: https://pkg.cfssl.org/
    Reference: http://blog.51cto.com/liuzhengwei521/2120535?utm_source=oschina-app


    1. Prepare the certificate issuing tool CFSSL
    wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
    mv cfssl_linux-amd64 /usr/local/bin/cfssl
    mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
    chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssl-certinfo /usr/local/bin/cfssljson
    
    1. Create the CA signing configuration file
    mkdir /opt/certs && cd /opt/certs
    cat > /opt/certs/ca-config.json <<EOF
    {
        "signing": {
            "default": {
                "expiry": "175200h"
            },
            "profiles": {
                "server": {
                    "expiry": "175200h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "server auth"
                    ]
                },
                "client": {
                    "expiry": "175200h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "client auth"
                    ]
                },
                "peer": {
                    "expiry": "175200h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "server auth",
                        "client auth"
                    ]
                }
            }
        }
    }
    EOF
    
    1. Create the CA certificate signing request file
    cat > /opt/certs/ca-csr.json <<EOF
    {
        "CN": "kubernetes-ca",
        "hosts": [
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "ST": "beijing",
                "L": "beijing",
                "O": "od",
                "OU": "ops"
            }
        ],
        "ca": {
            "expiry": "175200h"
        }
    }
    EOF
    
    1. Generate the CA certificate and private key
    [root@k8s-node3 certs]# cfssl gencert -initca ca-csr.json|cfssljson -bare ca - 
    2020/12/14 09:59:31 [INFO] generating a new CA key and certificate from CSR
    2020/12/14 09:59:31 [INFO] generate received request
    2020/12/14 09:59:31 [INFO] received CSR
    2020/12/14 09:59:31 [INFO] generating key: rsa-2048
    2020/12/14 09:59:31 [INFO] encoded CSR
    2020/12/14 09:59:31 [INFO] signed certificate with serial number 541033833394022225124150924404905984331621873569
    [root@k8s-node3 certs]# ls 
    ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
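
    Optionally, inspect the resulting CA certificate with the cfssl-certinfo tool installed earlier (a quick sanity check, not part of the original procedure):

    # Print the CA certificate's subject, issuer and validity period
    cfssl-certinfo -cert ca.pem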
    

    Deploying the etcd cluster

    Hostname    IP         Role
    k8s-master  10.0.0.11  etcd leader
    k8s-node1   10.0.0.12  etcd follower
    k8s-node2   10.0.0.13  etcd follower

    1. On k8s-node3, issue the certificate for communication between etcd members
    cat > /opt/certs/etcd-peer-csr.json <<EOF
    {
        "CN": "etcd-peer",
        "hosts": [
            "10.0.0.11",
            "10.0.0.12",
            "10.0.0.13"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "ST": "beijing",
                "L": "beijing",
                "O": "od",
                "OU": "ops"
            }
        ]
    }
    EOF
    
    [root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssljson -bare etcd-peer
    2020/12/14 10:05:22 [INFO] generate received request
    2020/12/14 10:05:22 [INFO] received CSR
    2020/12/14 10:05:22 [INFO] generating key: rsa-2048
    2020/12/14 10:05:23 [INFO] encoded CSR
    2020/12/14 10:05:23 [INFO] signed certificate with serial number 300469497136552423377618640775350926134698270185
    2020/12/14 10:05:23 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    [root@k8s-node3 certs]# ls etcd-peer*
    etcd-peer.csr  etcd-peer-csr.json  etcd-peer-key.pem  etcd-peer.pem
    
    1. Install the etcd service on k8s-master, k8s-node1 and k8s-node2
    yum -y install etcd
    
    1. From k8s-node3, copy the certificates to the /etc/etcd directory on k8s-master, k8s-node1 and k8s-node2
    cd /opt/certs
    scp -rp *.pem root@10.0.0.11:/etc/etcd/
    scp -rp *.pem root@10.0.0.12:/etc/etcd/
    scp -rp *.pem root@10.0.0.13:/etc/etcd/
    
    1. On k8s-master, k8s-node1 and k8s-node2, change the owner and group of the certificates
    chown -R etcd:etcd /etc/etcd/*.pem
    
    1. Configure etcd on k8s-master
    cat > /etc/etcd/etcd.conf <<EOF
    ETCD_DATA_DIR="/var/lib/etcd/"
    ETCD_LISTEN_PEER_URLS="https://10.0.0.11:2380"
    ETCD_LISTEN_CLIENT_URLS="https://10.0.0.11:2379,http://127.0.0.1:2379"
    ETCD_NAME="node1"
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.11:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.11:2379,http://127.0.0.1:2379"
    ETCD_INITIAL_CLUSTER="node1=https://10.0.0.11:2380,node2=https://10.0.0.12:2380,node3=https://10.0.0.13:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    ETCD_CERT_FILE="/etc/etcd/etcd-peer.pem"
    ETCD_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
    ETCD_CLIENT_CERT_AUTH="true"
    ETCD_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
    ETCD_AUTO_TLS="true"
    ETCD_PEER_CERT_FILE="/etc/etcd/etcd-peer.pem"
    ETCD_PEER_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
    ETCD_PEER_CLIENT_CERT_AUTH="true"
    ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
    ETCD_PEER_AUTO_TLS="true"
    EOF
    
    1. Configure etcd on k8s-node1
    cat > /etc/etcd/etcd.conf <<EOF
    ETCD_DATA_DIR="/var/lib/etcd/"
    ETCD_LISTEN_PEER_URLS="https://10.0.0.12:2380"
    ETCD_LISTEN_CLIENT_URLS="https://10.0.0.12:2379,http://127.0.0.1:2379"
    ETCD_NAME="node2"
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.12:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.12:2379,http://127.0.0.1:2379"
    ETCD_INITIAL_CLUSTER="node1=https://10.0.0.11:2380,node2=https://10.0.0.12:2380,node3=https://10.0.0.13:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    ETCD_CERT_FILE="/etc/etcd/etcd-peer.pem"
    ETCD_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
    ETCD_CLIENT_CERT_AUTH="true"
    ETCD_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
    ETCD_AUTO_TLS="true"
    ETCD_PEER_CERT_FILE="/etc/etcd/etcd-peer.pem"
    ETCD_PEER_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
    ETCD_PEER_CLIENT_CERT_AUTH="true"
    ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
    ETCD_PEER_AUTO_TLS="true"
    EOF
    
    1. Configure etcd on k8s-node2
    cat > /etc/etcd/etcd.conf <<EOF
    ETCD_DATA_DIR="/var/lib/etcd/"
    ETCD_LISTEN_PEER_URLS="https://10.0.0.13:2380"
    ETCD_LISTEN_CLIENT_URLS="https://10.0.0.13:2379,http://127.0.0.1:2379"
    ETCD_NAME="node3"
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.13:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.13:2379,http://127.0.0.1:2379"
    ETCD_INITIAL_CLUSTER="node1=https://10.0.0.11:2380,node2=https://10.0.0.12:2380,node3=https://10.0.0.13:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    ETCD_CERT_FILE="/etc/etcd/etcd-peer.pem"
    ETCD_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
    ETCD_CLIENT_CERT_AUTH="true"
    ETCD_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
    ETCD_AUTO_TLS="true"
    ETCD_PEER_CERT_FILE="/etc/etcd/etcd-peer.pem"
    ETCD_PEER_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
    ETCD_PEER_CLIENT_CERT_AUTH="true"
    ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
    ETCD_PEER_AUTO_TLS="true"
    EOF
    
    1. Start etcd on k8s-master, k8s-node1 and k8s-node2 at roughly the same time and enable it at boot
    systemctl start etcd
    systemctl enable etcd
    
    1. Verify the etcd cluster from k8s-master
    [root@k8s-master ~]# etcdctl member list
    55fcbe0adaa45350: name=node3 peerURLs=https://10.0.0.13:2380 clientURLs=http://127.0.0.1:2379,https://10.0.0.13:2379 isLeader=true
    cebdf10928a06f3c: name=node1 peerURLs=https://10.0.0.11:2380 clientURLs=http://127.0.0.1:2379,https://10.0.0.11:2379 isLeader=false
    f7a9c20602b8532e: name=node2 peerURLs=https://10.0.0.12:2380 clientURLs=http://127.0.0.1:2379,https://10.0.0.12:2379 isLeader=false
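
    You can also check member health with the v2 etcdctl shipped in the yum package (an optional check; the subcommand assumes the etcd v2 API):

    # Report the health of every cluster member
    etcdctl cluster-health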
    

    Master node installation

    1. On k8s-node3, download the binary package, extract it, and push the binaries the master needs to k8s-master

      This setup uses the v1.15.4 kubernetes-server binary package

    mkdir /opt/softs && cd /opt/softs
    
    wget https://storage.googleapis.com/kubernetes-release/release/v1.15.4/kubernetes-server-linux-amd64.tar.gz

    tar xf kubernetes-server-linux-amd64.tar.gz 
    cd /opt/softs/kubernetes/server/bin/
    scp -rp kube-apiserver kube-controller-manager kube-scheduler kubectl root@10.0.0.11:/usr/sbin/
    
    1. On k8s-node3, issue the client certificate
    cd /opt/certs/
    cat > /opt/certs/client-csr.json <<EOF
    {
        "CN": "k8s-node",
        "hosts": [
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "ST": "beijing",
                "L": "beijing",
                "O": "od",
                "OU": "ops"
            }
        ]
    }
    EOF
    
    [root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json|cfssljson -bare client
    2020/12/14 11:24:13 [INFO] generate received request
    2020/12/14 11:24:13 [INFO] received CSR
    2020/12/14 11:24:13 [INFO] generating key: rsa-2048
    2020/12/14 11:24:13 [INFO] encoded CSR
    2020/12/14 11:24:13 [INFO] signed certificate with serial number 558115824565037436109754375250535796590542635717
    2020/12/14 11:24:13 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    [root@k8s-node3 certs]# ls client*
    client.csr  client-csr.json  client-key.pem  client.pem
    
    1. On k8s-node3, issue the kube-apiserver certificate
    cat > /opt/certs/apiserver-csr.json <<EOF
    {
        "CN": "apiserver",
        "hosts": [
            "127.0.0.1",
            "10.254.0.1",
            "kubernetes.default",
            "kubernetes.default.svc",
            "kubernetes.default.svc.cluster",
            "kubernetes.default.svc.cluster.local",
            "10.0.0.11",
            "10.0.0.12",
            "10.0.0.13"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "ST": "beijing",
                "L": "beijing",
                "O": "od",
                "OU": "ops"
            }
        ]
    }
    EOF
    

    Note: when a pod is created, the first IP of the clusterIP range (10.254.0.1) is injected into it via environment variables as the internal address for reaching the api-server, which is how service auto-discovery works (see the check below).
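
    Later, once pods are running, you can see this address in any pod's environment (an optional check; the pod name is a placeholder):

    kubectl exec <pod-name> -- env | grep KUBERNETES_SERVICE
    # expected to include KUBERNETES_SERVICE_HOST=10.254.0.1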

    [root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json | cfssljson -bare apiserver
    2020/12/14 11:31:42 [INFO] generate received request
    2020/12/14 11:31:42 [INFO] received CSR
    2020/12/14 11:31:42 [INFO] generating key: rsa-2048
    2020/12/14 11:31:42 [INFO] encoded CSR
    2020/12/14 11:31:42 [INFO] signed certificate with serial number 418646719184970675117735868438071556604394393673
    2020/12/14 11:31:42 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    [root@k8s-node3 certs]# ls apiserver*
    apiserver.csr  apiserver-csr.json  apiserver-key.pem  apiserver.pem
    
    1. Push the certificates from k8s-node3 to k8s-master
    scp -rp ca*pem apiserver*pem client*pem root@10.0.0.11:/etc/kubernetes
    

    Installing the api-server service

    1. Check the certificates on the master node
    [root@k8s-master kubernetes]# ls /etc/kubernetes
    apiserver-key.pem  apiserver.pem  ca-key.pem  ca.pem  client-key.pem  client.pem
    
    1. Configure the api-server audit policy on the master node
    cat > /etc/kubernetes/audit.yaml <<EOF
    apiVersion: audit.k8s.io/v1beta1 # This is required.
    kind: Policy
    # Don't generate audit events for all requests in RequestReceived stage.
    omitStages:
      - "RequestReceived"
    rules:
      # Log pod changes at RequestResponse level
      - level: RequestResponse
        resources:
        - group: ""
          # Resource "pods" doesn't match requests to any subresource of pods,
          # which is consistent with the RBAC policy.
          resources: ["pods"]
      # Log "pods/log", "pods/status" at Metadata level
      - level: Metadata
        resources:
        - group: ""
          resources: ["pods/log", "pods/status"]
    
      # Don't log requests to a configmap called "controller-leader"
      - level: None
        resources:
        - group: ""
          resources: ["configmaps"]
          resourceNames: ["controller-leader"]
    
      # Don't log watch requests by the "system:kube-proxy" on endpoints or services
      - level: None
        users: ["system:kube-proxy"]
        verbs: ["watch"]
        resources:
        - group: "" # core API group
          resources: ["endpoints", "services"]
    
      # Don't log authenticated requests to certain non-resource URL paths.
      - level: None
        userGroups: ["system:authenticated"]
        nonResourceURLs:
        - "/api*" # Wildcard matching.
        - "/version"
    
      # Log the request body of configmap changes in kube-system.
      - level: Request
        resources:
        - group: "" # core API group
          resources: ["configmaps"]
        # This rule only applies to resources in the "kube-system" namespace.
        # The empty string "" can be used to select non-namespaced resources.
        namespaces: ["kube-system"]
    
      # Log configmap and secret changes in all other namespaces at the Metadata level.
      - level: Metadata
        resources:
        - group: "" # core API group
          resources: ["secrets", "configmaps"]
    
      # Log all other resources in core and extensions at the Request level.
      - level: Request
        resources:
        - group: "" # core API group
        - group: "extensions" # Version of group should NOT be included.
    
      # A catch-all rule to log all other requests at the Metadata level.
      - level: Metadata
        # Long-running requests like watches that fall under this rule will not
        # generate an audit event in RequestReceived.
        omitStages:
          - "RequestReceived"
    EOF
    
    1. Configure kube-apiserver.service on the master node
    cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=etcd.service
    [Service]
    ExecStart=/usr/sbin/kube-apiserver \
      --audit-log-path /var/log/kubernetes/audit-log \
      --audit-policy-file /etc/kubernetes/audit.yaml \
      --authorization-mode RBAC \
      --client-ca-file /etc/kubernetes/ca.pem \
      --requestheader-client-ca-file /etc/kubernetes/ca.pem \
      --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
      --etcd-cafile /etc/kubernetes/ca.pem \
      --etcd-certfile /etc/kubernetes/client.pem \
      --etcd-keyfile /etc/kubernetes/client-key.pem \
      --etcd-servers https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \
      --service-account-key-file /etc/kubernetes/ca-key.pem \
      --service-cluster-ip-range 10.254.0.0/16 \
      --service-node-port-range 30000-59999 \
      --kubelet-client-certificate /etc/kubernetes/client.pem \
      --kubelet-client-key /etc/kubernetes/client-key.pem \
      --log-dir  /var/log/kubernetes/ \
      --logtostderr=false \
      --tls-cert-file /etc/kubernetes/apiserver.pem \
      --tls-private-key-file /etc/kubernetes/apiserver-key.pem \
      --v 2
    Restart=on-failure
    [Install]
    WantedBy=multi-user.target
    EOF
    

    To keep things simple, apiserver-to-etcd and apiserver-to-kubelet communication share the same client cert.

    --audit-log-path /var/log/kubernetes/audit-log  # audit log path
    --audit-policy-file /etc/kubernetes/audit.yaml  # audit policy file
    --authorization-mode RBAC                       # authorization mode: RBAC
    --client-ca-file /etc/kubernetes/ca.pem         # client CA certificate
    --requestheader-client-ca-file /etc/kubernetes/ca.pem  # request-header CA certificate
    --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota  # enabled admission plugins
    --etcd-cafile /etc/kubernetes/ca.pem           # CA certificate for talking to etcd
    --etcd-certfile /etc/kubernetes/client.pem     # client certificate for talking to etcd
    --etcd-keyfile /etc/kubernetes/client-key.pem  # client key for talking to etcd
    --etcd-servers https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 
    --service-account-key-file /etc/kubernetes/ca-key.pem  # CA private key (verifies service account tokens)
    --service-cluster-ip-range 10.254.0.0/16               # ClusterIP (VIP) range
    --service-node-port-range 30000-59999           # NodePort range
    --kubelet-client-certificate /etc/kubernetes/client.pem  # client certificate for talking to kubelet
    --kubelet-client-key /etc/kubernetes/client-key.pem  # client key for talking to kubelet
    --log-dir  /var/log/kubernetes/   # log directory
    --logtostderr=false  # stop logging to stderr so logs go to the log files
    --tls-cert-file /etc/kubernetes/apiserver.pem             # API serving certificate
    --tls-private-key-file /etc/kubernetes/apiserver-key.pem  # API serving private key
    --v 2  # log level 2
    Restart=on-failure
    
    1. On the master node, create the log directory, then start kube-apiserver and enable it at boot
    mkdir /var/log/kubernetes
    systemctl daemon-reload
    systemctl start kube-apiserver.service
    systemctl enable kube-apiserver.service
    
    1. Verify on the master node (scheduler and controller-manager report Unhealthy simply because they have not been installed yet)
    [root@k8s-master kubernetes]# kubectl get cs
    NAME                 STATUS      MESSAGE                                                                                     ERROR
    scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused   
    controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused   
    etcd-1               Healthy     {"health":"true"}                                                                           
    etcd-2               Healthy     {"health":"true"}                                                                           
    etcd-0               Healthy     {"health":"true"}
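
    You can also confirm the apiserver itself is serving on its local insecure port (an optional check; /healthz should return ok):

    curl -s http://127.0.0.1:8080/healthz ; echo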
    

    Installing the controller-manager service

    1. Configure kube-controller-manager.service on the master node
    cat > /usr/lib/systemd/system/kube-controller-manager.service <<EOF
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    After=kube-apiserver.service
    [Service]
    ExecStart=/usr/sbin/kube-controller-manager \
      --cluster-cidr 172.18.0.0/16 \
      --log-dir /var/log/kubernetes/ \
      --master http://127.0.0.1:8080 \
      --service-account-private-key-file /etc/kubernetes/ca-key.pem \
      --service-cluster-ip-range 10.254.0.0/16 \
      --root-ca-file /etc/kubernetes/ca.pem \
      --logtostderr=false \
      --v 2
    Restart=on-failure
    [Install]
    WantedBy=multi-user.target
    EOF
    
    1. Start controller-manager on the master node and enable it at boot
    systemctl daemon-reload 
    systemctl enable kube-controller-manager.service
    systemctl start kube-controller-manager.service
    

    Installing the scheduler service

    1. Configure kube-scheduler.service on the master node
    cat > /usr/lib/systemd/system/kube-scheduler.service <<EOF
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    After=kube-apiserver.service
    [Service]
    ExecStart=/usr/sbin/kube-scheduler \
      --log-dir /var/log/kubernetes/ \
      --master http://127.0.0.1:8080 \
      --logtostderr=false \
      --v 2
    Restart=on-failure
    [Install]
    WantedBy=multi-user.target
    EOF
    
    1. Start the scheduler on the master node and enable it at boot
    systemctl daemon-reload
    systemctl enable kube-scheduler.service
    systemctl start kube-scheduler.service
    
    1. Verify on the master node
    [root@k8s-master kubernetes]# kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    scheduler            Healthy   ok                  
    controller-manager   Healthy   ok                  
    etcd-1               Healthy   {"health":"true"}   
    etcd-0               Healthy   {"health":"true"}   
    etcd-2               Healthy   {"health":"true"}
    

    Node installation

    Installing the kubelet service

    1. On k8s-node3, issue the kubelet certificate
    cd /opt/certs/
    cat > kubelet-csr.json <<EOF
    {
        "CN": "kubelet-node",
        "hosts": [
        "127.0.0.1",
        "10.0.0.11",
        "10.0.0.12",
        "10.0.0.13",
        "10.0.0.14",
        "10.0.0.15"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "ST": "beijing",
                "L": "beijing",
                "O": "od",
                "OU": "ops"
            }
        ]
    }
    EOF
    
    [root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssljson -bare kubelet
    2020/12/14 14:55:00 [INFO] generate received request
    2020/12/14 14:55:00 [INFO] received CSR
    2020/12/14 14:55:00 [INFO] generating key: rsa-2048
    2020/12/14 14:55:00 [INFO] encoded CSR
    2020/12/14 14:55:00 [INFO] signed certificate with serial number 110678673830256746819664644693971611232380342377
    2020/12/14 14:55:00 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    [root@k8s-node3 certs]# ls kubelet*
    kubelet.csr  kubelet-csr.json  kubelet-key.pem  kubelet.pem
    
    1. **On k8s-node3, generate the kubelet client credential kubelet.kubeconfig**
    ln -s /opt/softs/kubernetes/server/bin/kubectl /usr/sbin/
    
    # Set the cluster parameters
    kubectl config set-cluster myk8s \
       --certificate-authority=/opt/certs/ca.pem \
       --embed-certs=true \
       --server=https://10.0.0.11:6443 \
       --kubeconfig=kubelet.kubeconfig
    # Set the client credentials
    kubectl config set-credentials k8s-node --client-certificate=/opt/certs/client.pem --client-key=/opt/certs/client-key.pem --embed-certs=true --kubeconfig=kubelet.kubeconfig
    # Create the context
    kubectl config set-context myk8s-context \
       --cluster=myk8s \
       --user=k8s-node \
       --kubeconfig=kubelet.kubeconfig
    # Switch to the context
    kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
    
    # cat kubelet.kubeconfig 
    apiVersion: v1
    clusters:
    - cluster:    # cluster
        certificate-authority-data:  ... ... # CA certificate
        server: https://10.0.0.11:6443       # apiserver address
      name: myk8s # cluster name
    contexts:
    - context:     # context
        cluster: myk8s
        user: k8s-node
      name: myk8s-context
    current-context: myk8s-context # current context
    kind: Config
    preferences: {}
    users:
    - name: k8s-node # user name
      user:
        client-certificate-data: ... ... # client certificate
        client-key-data:         ... ... # client private key
    
    1. On the master node, create the RBAC ClusterRoleBinding for the k8s-node user (only needs to be created once)
    cat > k8s-node.yaml <<EOF
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: k8s-node
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:node
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: k8s-node
    EOF
    kubectl create -f k8s-node.yaml
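
    To confirm the binding, you can print it back (an optional sanity check):

    # Show the ClusterRoleBinding that grants the system:node role to the k8s-node user
    kubectl get clusterrolebinding k8s-node -o yaml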
    
    1. On the nodes, install docker-ce, start it and enable it at boot, then configure a registry mirror and the systemd cgroup driver
    wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
    yum install docker-ce -y
    systemctl enable docker
    systemctl start docker
    cat > /etc/docker/daemon.json <<EOF
    {
      "registry-mirrors": ["https://registry.docker-cn.com"],
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    systemctl restart docker.service
    docker info
    
    1. From k8s-node3, push the kubelet binary, the client credential and the required certificates to the nodes
    cd /opt/certs/
    scp -rp kubelet.kubeconfig ca*pem kubelet*pem root@10.0.0.12:/etc/kubernetes
    scp -rp /opt/softs/kubernetes/server/bin/kubelet root@10.0.0.12:/usr/bin/
    
    scp -rp kubelet.kubeconfig ca*pem kubelet*pem root@10.0.0.13:/etc/kubernetes
    scp -rp /opt/softs/kubernetes/server/bin/kubelet root@10.0.0.13:/usr/bin/
    
    1. On the nodes, configure kubelet.service, start it and enable it at boot
    mkdir /var/log/kubernetes
    cat > /usr/lib/systemd/system/kubelet.service <<EOF
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service multi-user.target
    Requires=docker.service
    [Service]
    ExecStart=/usr/bin/kubelet \
      --anonymous-auth=false \
      --cgroup-driver systemd \
      --cluster-dns 10.254.230.254 \
      --cluster-domain cluster.local \
      --runtime-cgroups=/systemd/system.slice \
      --kubelet-cgroups=/systemd/system.slice \
      --fail-swap-on=false \
      --client-ca-file /etc/kubernetes/ca.pem \
      --tls-cert-file /etc/kubernetes/kubelet.pem \
      --tls-private-key-file /etc/kubernetes/kubelet-key.pem \
      --hostname-override 10.0.0.12 \
      --image-gc-high-threshold 90 \
      --image-gc-low-threshold 70 \
      --kubeconfig /etc/kubernetes/kubelet.kubeconfig \
      --log-dir /var/log/kubernetes/ \
      --pod-infra-container-image t29617342/pause-amd64:3.0 \
      --logtostderr=false \
      --v=2
    Restart=on-failure
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable kubelet.service
    systemctl start kubelet.service
    
    Requires=docker.service # service dependency
    [Service]
    ExecStart=/usr/bin/kubelet 
    --anonymous-auth=false          # disable anonymous authentication
    --cgroup-driver systemd         # use systemd as the cgroup driver
    --cluster-dns 10.254.230.254    # cluster DNS address
    --cluster-domain cluster.local  # cluster DNS domain; must match the DNS service configuration
    --runtime-cgroups=/systemd/system.slice 
    --kubelet-cgroups=/systemd/system.slice 
    --fail-swap-on=false            # do not fail when swap is enabled
    --client-ca-file /etc/kubernetes/ca.pem                 # CA certificate
    --tls-cert-file /etc/kubernetes/kubelet.pem             # kubelet certificate
    --tls-private-key-file /etc/kubernetes/kubelet-key.pem  # kubelet private key
    --hostname-override 10.0.0.13   # kubelet hostname; different on each node
    --image-gc-high-threshold 90    # always run image garbage collection above 90% disk usage
    --image-gc-low-threshold 70     # never run image garbage collection below 70% disk usage
    --kubeconfig /etc/kubernetes/kubelet.kubeconfig  # client credential
    --pod-infra-container-image t29617342/pause-amd64:3.0  # pod infrastructure (pause) container image
    

    Note: the pod infrastructure image used here is a public image from the t29617342 user on the official Docker Hub registry!

    1. Repeat steps 4, 5 and 6 on the other nodes (note: change the scp target IP and the hostname-override)

    2. Verify from the master node

    [root@k8s-master ~]# kubectl get nodes
    NAME        STATUS   ROLES    AGE     VERSION
    10.0.0.12   Ready    <none>   4m19s   v1.15.4
    10.0.0.13   Ready    <none>   13s     v1.15.4
    

    Installing the kube-proxy service

    1. On k8s-node3, issue the kube-proxy-client certificate
    cd /opt/certs/
    cat > /opt/certs/kube-proxy-csr.json <<EOF
    {
        "CN": "system:kube-proxy",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "ST": "beijing",
                "L": "beijing",
                "O": "od",
                "OU": "ops"
            }
        ]
    }
    EOF
    
    [root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json | cfssljson -bare kube-proxy-client
    2020/12/14 16:20:46 [INFO] generate received request
    2020/12/14 16:20:46 [INFO] received CSR
    2020/12/14 16:20:46 [INFO] generating key: rsa-2048
    2020/12/14 16:20:46 [INFO] encoded CSR
    2020/12/14 16:20:46 [INFO] signed certificate with serial number 364147028440857189661095322729307531340019233888
    2020/12/14 16:20:46 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    [root@k8s-node3 certs]# ls kube-proxy-c*
    kube-proxy-client.csr  kube-proxy-client-key.pem  kube-proxy-client.pem  kube-proxy-csr.json
    
    1. On k8s-node3, generate the kube-proxy client credential kube-proxy.kubeconfig
    kubectl config set-cluster myk8s \
       --certificate-authority=/opt/certs/ca.pem \
       --embed-certs=true \
       --server=https://10.0.0.11:6443 \
       --kubeconfig=kube-proxy.kubeconfig
    kubectl config set-credentials kube-proxy \
       --client-certificate=/opt/certs/kube-proxy-client.pem \
       --client-key=/opt/certs/kube-proxy-client-key.pem \
       --embed-certs=true \
       --kubeconfig=kube-proxy.kubeconfig
    kubectl config set-context myk8s-context \
       --cluster=myk8s \
       --user=kube-proxy \
       --kubeconfig=kube-proxy.kubeconfig
    kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
    
    1. From k8s-node3, push the kube-proxy binary and the client credential to the nodes
    scp -rp /opt/certs/kube-proxy.kubeconfig root@10.0.0.12:/etc/kubernetes/
    scp -rp /opt/certs/kube-proxy.kubeconfig root@10.0.0.13:/etc/kubernetes/
    scp -rp /opt/softs/kubernetes/server/bin/kube-proxy root@10.0.0.12:/usr/bin/
    scp -rp /opt/softs/kubernetes/server/bin/kube-proxy root@10.0.0.13:/usr/bin/
    
    1. On the nodes, configure kube-proxy.service, start it and enable it at boot (remember to change hostname-override)
    cat > /usr/lib/systemd/system/kube-proxy.service <<EOF
    [Unit]
    Description=Kubernetes Proxy
    After=network.target
    [Service]
    ExecStart=/usr/bin/kube-proxy \
      --kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \
      --cluster-cidr 172.18.0.0/16 \
      --hostname-override 10.0.0.12 \
      --logtostderr=false \
      --v=2
    Restart=on-failure
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable kube-proxy.service
    systemctl start kube-proxy.service
    
    --cluster-cidr 172.18.0.0/16  # pod IP range
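
    Once kube-proxy is running, you can check that it has programmed service rules into iptables (an optional sanity check; chain names assume the default iptables proxy mode):

    # kube-proxy in iptables mode creates KUBE-SERVICES / KUBE-NODEPORTS chains
    iptables-save | grep -E 'KUBE-(SERVICES|NODEPORTS)' | head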
    

    Configuring the flannel network

    1. Install flannel on all nodes (installing it on the master too makes testing easier)
    yum install flannel -y
    mkdir /opt/certs/
    
    1. On k8s-node3, reuse the client certificate and push it to all the other nodes
    cd /opt/certs/
    scp -rp ca.pem client*pem root@10.0.0.11:/opt/certs/
    scp -rp ca.pem client*pem root@10.0.0.12:/opt/certs/
    scp -rp ca.pem client*pem root@10.0.0.13:/opt/certs/
    
    1. On an etcd node, create the flannel key
    # this key defines the pod IP address range
    etcdctl mk /atomic.io/network/config '{ "Network": "172.18.0.0/16","Backend": {"Type": "vxlan"} }'
    

    Note: this may fail with

    Error: x509: certificate signed by unknown authority

    Retrying a few times usually works; the sketch below shows an explicit alternative.
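
    Alternatively, point etcdctl at the TLS endpoint explicitly and pass the client certificate (a sketch using the etcd v2 etcdctl flags; paths assume the certificates copied to /opt/certs above):

    etcdctl --endpoints https://10.0.0.11:2379 \
      --ca-file /opt/certs/ca.pem \
      --cert-file /opt/certs/client.pem \
      --key-file /opt/certs/client-key.pem \
      mk /atomic.io/network/config '{ "Network": "172.18.0.0/16","Backend": {"Type": "vxlan"} }'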

    1. On all nodes, configure flanneld, start it and enable it at boot
    cat > /etc/sysconfig/flanneld <<EOF
    FLANNEL_ETCD_ENDPOINTS="https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379"
    FLANNEL_ETCD_PREFIX="/atomic.io/network"
    FLANNEL_OPTIONS="-etcd-cafile=/opt/certs/ca.pem -etcd-certfile=/opt/certs/client.pem -etcd-keyfile=/opt/certs/client-key.pem"
    EOF
    systemctl enable flanneld.service
    systemctl start flanneld.service
    
    1. On k8s-node1 and k8s-node2, modify docker.service: add the flannel network options and enable iptables forwarding
    sed -i '/ExecStart/c ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock' /usr/lib/systemd/system/docker.service
    sed -i '/ExecStart/i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT' /usr/lib/systemd/system/docker.service
    systemctl daemon-reload 
    systemctl restart docker
    

    When docker starts it must use the DOCKER_NETWORK_OPTIONS parameters provided by flannel so that both use the same subnet.

    [root@k8s-node1 ~]# cat /run/flannel/docker 
    DOCKER_OPT_BIP="--bip=172.18.28.1/24"
    DOCKER_OPT_IPMASQ="--ip-masq=true"
    DOCKER_OPT_MTU="--mtu=1450"
    DOCKER_NETWORK_OPTIONS=" --bip=172.18.28.1/24 --ip-masq=true --mtu=1450"
    
    1. Verify cross-node connectivity from the master
    # docker0 and flannel.1 are on the same 172.18 network
    ifconfig
    # start a container on each node
    docker run -it alpine
    # check the container IP
    ifconfig
    # from the master, ping the containers started on the nodes to verify cross-node connectivity
    
    1. Verify the k8s cluster from the master

    ① Create a pod resource

    kubectl run nginx --image=nginx:1.13 --replicas=2
    kubectl get pod -o wide -A
    

    kubectl run (for this use) will be removed in the future; use instead:

    kubectl create deployment test --image=nginx:1.13
    

    Newer k8s versions support the -A flag

    -A, --all-namespaces # if present, list the requested objects across all namespaces
    

    ② Create an svc resource

    kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
    kubectl get svc
    

    ③ Verify access

    [root@k8s-master ~]# curl -I 10.0.0.12:55531
    HTTP/1.1 200 OK
    Server: nginx/1.13.12
    Date: Mon, 14 Dec 2020 09:27:20 GMT
    Content-Type: text/html
    Content-Length: 612
    Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
    Connection: keep-alive
    ETag: "5acb8e45-264"
    Accept-Ranges: bytes
    
    [root@k8s-master ~]# curl -I 10.0.0.13:55531
    HTTP/1.1 200 OK
    Server: nginx/1.13.12
    Date: Mon, 14 Dec 2020 09:27:23 GMT
    Content-Type: text/html
    Content-Length: 612
    Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
    Connection: keep-alive
    ETag: "5acb8e45-264"
    Accept-Ranges: bytes
    

    kubectl command-line TAB completion:

    echo "source <(kubectl completion bash)" >> ~/.bashrc
    

    Taints and tolerations

    Node/pod affinity is used to attract pods to a set of nodes (by topology domain), either as a preference or as a hard requirement.

    Taints are the opposite: applied to a node, they let the node repel a set of pods.

    A taint is a key=value:effect entry defined on a node; it makes the node refuse to schedule and run pods unless a pod has a toleration that accepts that taint.

    Tolerations are applied to pods and allow (but do not require) a pod to be scheduled onto nodes with matching taints.

    A toleration is key/value data defined on a Pod object that configures which node taints it can tolerate; the scheduler will only place a pod on nodes whose taints the pod tolerates.

    Taints and tolerations work together to keep pods off unsuitable nodes. One or more taints applied to a node mark that the node should not accept any pod that does not tolerate them.

    Note: the reason pods are normally never scheduled onto the k8s master node is that the master carries a taint.

    Evaluating multiple taints and multiple tolerations:

    Multiple taints can be set on the same node, and multiple tolerations on the same pod.

    Kubernetes processes them like a filter: start from all of the node's taints, ignore those matched by one of the pod's tolerations, and the remaining taints act on the pod according to their effect value:


    Taints

    Taints are a node property, applied in a way similar to labels.

    Taint effect types:

    • NoSchedule: do not schedule new pods onto this node; existing pods are not affected.
    • PreferNoSchedule: the soft version; prefer scheduling onto other nodes first.
    • NoExecute: evict. New pods are refused and existing pods are driven off. Useful when taking a node out of service.

    A taint whose effect is NoExecute affects pods already running on the node:

    • If a pod does not tolerate the NoExecute taint, it is evicted immediately.
    • If a pod tolerates the NoExecute taint and its toleration does not specify tolerationSeconds, it keeps running on the node indefinitely.
    • If a pod tolerates the NoExecute taint but its toleration specifies tolerationSeconds, that value is how long the pod may keep running on the node before being evicted.

    1. View the node labels
    [root@k8s-master ~]# kubectl get nodes --show-labels
    NAME        STATUS     ROLES    AGE   VERSION   LABELS
    10.0.0.12   NotReady   <none>   17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.12,kubernetes.io/os=linux
    10.0.0.13   NotReady   <none>   17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.13,kubernetes.io/os=linux
    
    1. Add a label: node role
    kubectl label nodes 10.0.0.12 node-role.kubernetes.io/node=
    
    1. View the node labels again: the ROLES column for 10.0.0.12 is now node
    [root@k8s-master ~]# kubectl get nodes --show-labels
    NAME        STATUS     ROLES    AGE   VERSION   LABELS
    10.0.0.12   NotReady   node     17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.12,kubernetes.io/os=linux,node-role.kubernetes.io/node=
    10.0.0.13   NotReady   <none>   17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.13,kubernetes.io/os=linux
    
    1. Remove the label
    kubectl label nodes 10.0.0.12 node-role.kubernetes.io/node-
    
    1. Add labels: disk type
    kubectl label nodes 10.0.0.12 disk=ssd
    kubectl label nodes 10.0.0.13 disk=sata
    
    1. Clean up the other pods
    kubectl delete deployments --all
    
    1. View the current pods: 2 of them
    [root@k8s-master ~]# kubectl get pod -o wide
    NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
    nginx-6459cd46fd-dl2ct   1/1     Running   1          16h   172.18.28.3   10.0.0.12   <none>           <none>
    nginx-6459cd46fd-zfwbg   1/1     Running   0          16h   172.18.98.4   10.0.0.13   <none>           <none>
    

    NoSchedule

    1. Add a taint: NoSchedule keyed on disk type
    kubectl taint node 10.0.0.12 disk=ssd:NoSchedule
    
    1. View the taint
    kubectl describe nodes 10.0.0.12|grep Taint
    
    1. Adjust the replica count
    kubectl scale deployment nginx --replicas=5
    
    1. Check the pods: all new pods are created on 10.0.0.13
    kubectl get pod -o wide
    
    1. Remove the taint
    kubectl taint node 10.0.0.12 disk-
    

    NoExecute

    1. Add a taint: NoExecute keyed on disk type
    kubectl taint node 10.0.0.12 disk=ssd:NoExecute
    
    1. Check the pods: all pods now run on 10.0.0.13; the pods previously on 10.0.0.12 were moved to 10.0.0.13
    kubectl get pod -o wide
    
    1. Remove the taint
    kubectl taint node 10.0.0.12 disk-
    

    PreferNoSchedule

    1. Add a taint: PreferNoSchedule keyed on disk type
    kubectl taint node 10.0.0.12 disk=ssd:PreferNoSchedule
    
    1. Adjust the replica count
    kubectl scale deployment nginx --replicas=2
    kubectl scale deployment nginx --replicas=5
    
    1. Check the pods: some pods are still created on 10.0.0.12
    kubectl get pod -o wide
    
    1. Remove the taint
    kubectl taint node 10.0.0.12 disk-
    

    Tolerations

    Tolerations are a pod.spec property; a pod that declares a toleration can tolerate the corresponding taint and may be scheduled onto nodes that carry it.


    1. View the field documentation
    kubectl explain pod.spec.tolerations
    
    1. Write a deploy resource yaml that tolerates the NoExecute taint
    mkdir -p /root/k8s_yaml/deploy && cd /root/k8s_yaml/deploy
    cat > /root/k8s_yaml/deploy/k8s_deploy.yaml <<EOF
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: nginx
        spec:
          tolerations:
          - key: "disk"
            operator: "Equal"
            value: "ssd"
            effect: "NoExecute"
          containers:
          - name: nginx
            image: nginx:1.13
            ports:
            - containerPort: 80
    EOF
    
    1. Create the deploy resource
    kubectl delete deployments nginx
    kubectl create -f k8s_deploy.yaml
    
    1. View the current pods
    kubectl get pod -o wide
    
    1. Add a taint: NoExecute keyed on disk type
    kubectl taint node 10.0.0.12 disk=ssd:NoExecute
    
    1. Adjust the replica count
    kubectl scale deployment nginx --replicas=5
    
    1. Check the pods: some pods are created on 10.0.0.12 because they tolerate the taint
    kubectl get pod -o wide
    
    1. Remove the taint
    kubectl taint node 10.0.0.12 disk-
    

    pod.spec.tolerations examples

    tolerations:
    - key: "key"
      operator: "Equal"
      value: "value"
      effect: "NoSchedule"
    ---
    tolerations:
    - key: "key"
      operator: "Exists"
      effect: "NoSchedule"
    ---
    tolerations:
    - key: "key"
      operator: "Equal"
      value: "value"
      effect: "NoExecute"
      tolerationSeconds: 3600
    

    Notes:

    • key, value and effect must match the taint set on the node
    • when operator is Exists, value is ignored; only key and effect need to match
    • tolerationSeconds applies to tolerations of NoExecute taints: when specified, it is how long the pod may keep running on the node.

    When neither key nor effect is specified and operator is Exists, the toleration matches every taint (all keys, values and effects)

    tolerations:
    - operator: "Exists"
    

    When effect is not specified, the toleration matches all effects of taints with that key

    tolerations:
    - key: "key"
      operator: "Exists"
    

    When there are multiple masters, to avoid wasting resources you can set:

    kubectl taint nodes Node-name node-role.kubernetes.io/master=:PreferNoSchedule
    

    Common resources

    The pod resource

    A pod consists of at least two containers: the infrastructure (pause) container plus the application container(s).

    • Dynamic pod: its yaml definition comes from etcd (via the apiserver).

    • Static pod: the kubelet reads the yaml file from a local directory.


    1. On k8s-node1, modify kubelet.service to set the static pod path: this directory may contain only static pod yaml files
    sed -i '22a   --pod-manifest-path /etc/kubernetes/manifest \' /usr/lib/systemd/system/kubelet.service
    mkdir /etc/kubernetes/manifest
    systemctl daemon-reload
    systemctl restart kubelet.service
    
    1. On k8s-node1, create the static pod yaml file: the static pod is created immediately and its name is suffixed with the host IP
    cat > /etc/kubernetes/manifest/k8s_pod.yaml <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-pod
    spec:
      containers:
        - name: nginx
          image: nginx:1.13
          ports:
            - containerPort: 80
    EOF
    
    1. View the pods from the master
    [root@k8s-master ~]# kubectl get pod
    NAME                     READY   STATUS    RESTARTS   AGE
    nginx-6459cd46fd-dl2ct   1/1     Running   0          51m
    nginx-6459cd46fd-zfwbg   1/1     Running   0          51m
    test-8c7c68d6d-x79hf     1/1     Running   0          51m
    static-pod-10.0.0.12     1/1     Running   0          3s
    

    kubeadm-based k8s deployments are built on static pods.

    Static pods:

    • Creating the yaml file immediately creates the pod.

    • Moving the yaml file away immediately removes the pod (see the example below).
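
    A quick way to see this in action (commands assume the manifest path configured above; run the mv on k8s-node1 and check from the master):

    mv /etc/kubernetes/manifest/k8s_pod.yaml /tmp/   # static-pod-10.0.0.12 disappears
    mv /tmp/k8s_pod.yaml /etc/kubernetes/manifest/   # it is recreated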


    The secret resource

    A secret is a namespace-scoped resource that holds sensitive data such as passwords, keys and certificates.


    Integrating k8s with Harbor

    First set up a Harbor docker image registry, enable https, and create a private project.

    Then use a secret to hold the credentials used for authentication when pulling images.


    First approach: the deploy references the secret directly when pulling images

    1. Create the secret regcred
    kubectl create secret docker-registry regcred --docker-server=blog.oldqiang.com --docker-username=admin --docker-password=a123456 --docker-email=296917342@qq.com
    
    1. View the secrets
    [root@k8s-master ~]# kubectl get secrets 
    NAME                       TYPE                                  DATA   AGE
    default-token-vgc4l        kubernetes.io/service-account-token   3      2d19h
    regcred                    kubernetes.io/dockerconfigjson        1      114s
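
    Optionally, inspect the stored docker config (the data is base64-encoded; the jsonpath key is escaped because it contains a dot):

    kubectl get secret regcred -o yaml
    kubectl get secret regcred -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d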
    
    1. The deploy resource uses the secret's credentials to pull the image
    cd /root/k8s_yaml/deploy
    cat > /root/k8s_yaml/deploy/k8s_deploy_secrets.yaml <<EOF
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: nginx
        spec:
          imagePullSecrets:
          - name: regcred
          containers:
          - name: nginx
            image: blog.oldqiang.com/oldboy/nginx:1.13
            ports:
            - containerPort: 80
    EOF
    
    1. Create the deploy resource
    kubectl delete deployments nginx
    kubectl create -f k8s_deploy_secrets.yaml
    
    1. View the current pods: the resource was created successfully
    kubectl get pod -o wide
    

    RBAC approach: the deploy pulls images through a ServiceAccount that references the secret

    1. Create the secret harbor-secret
    kubectl create secret docker-registry harbor-secret --namespace=default --docker-username=admin --docker-password=a123456 --docker-server=blog.oldqiang.com
    
    1. Create the yaml files for the ServiceAccount and the pod
    cd /root/k8s_yaml/deploy
    # create the ServiceAccount
    cat > /root/k8s_yaml/deploy/k8s_sa_harbor.yaml <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: docker-image
      namespace: default
    imagePullSecrets:
    - name: harbor-secret
    EOF
    # create the pod
    cat > /root/k8s_yaml/deploy/k8s_pod.yaml <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-pod
    spec:
      serviceAccount: docker-image
      containers:
        - name: nginx
          image: blog.oldqiang.com/oldboy/nginx:1.13
          ports:
            - containerPort: 80
    EOF
    
    1. Create the resources
    kubectl delete deployments nginx
    kubectl create -f k8s_sa_harbor.yaml
    kubectl create -f k8s_pod.yaml
    
    1. View the current pods: the resources were created successfully
    kubectl get pod -o wide
    

    The configmap resource

    A configmap stores configuration files and can be mounted into pod containers.


    1. Create the configuration file
    cat > /root/k8s_yaml/deploy/81.conf <<EOF
        server {
            listen       81;
            server_name  localhost;
            root         /html;
            index      index.html index.htm;
            location / {
            }
        }
    EOF
    
    1. Create the configmap (multiple --from-file options may be given)
    kubectl create configmap 81.conf --from-file=/root/k8s_yaml/deploy/81.conf
    
    1. View the configmap
    kubectl get cm
    kubectl get cm 81.conf -o yaml
    
    1. Mount the configmap in the deploy resource
    cd /root/k8s_yaml/deploy
    cat > /root/k8s_yaml/deploy/k8s_deploy_cm.yaml <<EOF
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            app: nginx
        spec:
          volumes:
            - name: nginx-config
              configMap:
                name: 81.conf
                items:
                  - key: 81.conf  # select one of the files in the configmap
                    path: 81.conf
          containers:
          - name: nginx
            image: nginx:1.13
            volumeMounts:
              - name: nginx-config
                mountPath: /etc/nginx/conf.d
            ports:
            - containerPort: 80
              name: port1
            - containerPort: 81
              name: port2
    EOF
    
    1. Create the deploy resource
    kubectl delete deployments nginx
    kubectl create -f k8s_deploy_cm.yaml
    
    1. View the current pods
    kubectl get pod -o wide
    
    1. However, volumeMounts like this can only mount a whole directory: the files that were originally in /etc/nginx/conf.d are hidden, so port 80 stops serving (see the subPath sketch below).
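
    A common workaround (not used in the original text) is to mount just that one key with subPath, which leaves the rest of /etc/nginx/conf.d intact. A minimal sketch of the container section, assuming the same nginx-config volume as above (note that subPath mounts do not pick up later configmap updates):

          containers:
          - name: nginx
            image: nginx:1.13
            volumeMounts:
              - name: nginx-config
                mountPath: /etc/nginx/conf.d/81.conf
                subPath: 81.conf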

    The initContainers field

    Init containers run and complete their initialization work before the pod's main containers start.


    1. View the field documentation
    kubectl explain pod.spec.initContainers
    
    1. Deploy resource that mounts the configmap via init containers

    Initialization steps:

    • Init container one: mounts the hostPath volume and the configmap, and copies 81.conf into the hostPath directory
    • Init container two: mounts the hostPath volume and copies default.conf into it

    Finally the Deployment's main container starts and mounts that directory.

    cd /root/k8s_yaml/deploy
    cat > /root/k8s_yaml/deploy/k8s_deploy_init.yaml <<EOF
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: nginx
        spec:
          volumes:
            - name: config
              hostPath:
                path: /mnt
            - name: tmp
              configMap:
                name: 81.conf
                items:
                  - key: 81.conf
                    path: 81.conf
          initContainers:
          - name: cp1
            image: nginx:1.13
            volumeMounts:
              - name: config
                mountPath: /nginx_config
              - name: tmp
                mountPath: /tmp
            command: ["cp","/tmp/81.conf","/nginx_config/"]
          - name: cp2
            image: nginx:1.13
            volumeMounts:
              - name: config
                mountPath: /nginx_config
            command: ["cp","/etc/nginx/conf.d/default.conf","/nginx_config/"]
          containers:
          - name: nginx
            image: nginx:1.13
            volumeMounts:
              - name: config
                mountPath: /etc/nginx/conf.d
            ports:
            - containerPort: 80
              name: port1
            - containerPort: 81
              name: port2
    EOF
    
    1. Create the deploy resource
    kubectl delete deployments nginx
    kubectl create -f k8s_deploy_init.yaml
    
    1. View the current pods
    kubectl get pod -o wide -l app=nginx
    
    1. Check that both configuration files are present: 81.conf and default.conf
    kubectl exec -ti nginx-7879567f94-25g5s /bin/bash
    ls /etc/nginx/conf.d
    

    Common services

    RBAC

    RBAC: role-based access control

    RBAC is kubernetes' access-authorization mechanism; it is enabled by setting --authorization-mode=RBAC on the apiserver.

    RBAC authorization takes two steps:

    1) Define a role: the role definition specifies the access-control rules for resources;

    2) Bind the role: bind a subject to the role to grant it access.


    User: sa (ServiceAccount)

    Role: role

    • Namespaced role: Role
      • Binding (grant): RoleBinding (a minimal sketch follows below)
    • Cluster-wide role: ClusterRole
      • Binding (grant): ClusterRoleBinding
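
    For reference, a namespaced Role bound to a ServiceAccount looks like this (an illustrative sketch, not part of the original cluster setup; it reuses the docker-image ServiceAccount created earlier, and the role names are arbitrary):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: default
      name: pod-reader
    rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      namespace: default
      name: read-pods
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: pod-reader
    subjects:
    - kind: ServiceAccount
      name: docker-image
      namespace: default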

    K8S RBAC in detail (reference)


    Usage flow

    (RBAC usage flow diagram)

    • Human users: if a user needs permissions, bind the Role to a User (or Group); this requires creating the User/Group;

    • Programs: if a program needs permissions, bind the Role to a ServiceAccount (this requires creating the ServiceAccount and specifying it in the deployment).


    Deploying the DNS service

    Deploy coredns (see the official documentation).

    1. On the master node, create the manifest coredns.yaml (pinned to node2 via nodeName)
    mkdir -p /root/k8s_yaml/dns && cd /root/k8s_yaml/dns
    cat > /root/k8s_yaml/dns/coredns.yaml <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: coredns
      namespace: kube-system
      labels:
          kubernetes.io/cluster-service: "true"
          addonmanager.kubernetes.io/mode: Reconcile
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
        addonmanager.kubernetes.io/mode: Reconcile
      name: system:coredns
    rules:
    - apiGroups:
      - ""
      resources:
      - endpoints
      - services
      - pods
      - namespaces
      verbs:
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - nodes
      verbs:
      - get
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
        addonmanager.kubernetes.io/mode: EnsureExists
      name: system:coredns
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:coredns
    subjects:
    - kind: ServiceAccount
      name: coredns
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
      labels:
          addonmanager.kubernetes.io/mode: EnsureExists
    data:
      Corefile: |
        .:53 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods insecure
                upstream
                fallthrough in-addr.arpa ip6.arpa
                ttl 30
            }
            prometheus :9153
            forward . /etc/resolv.conf
            cache 30
            loop
            reload
            loadbalance
        }
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/name: "CoreDNS"
    spec:
      # replicas: not specified here:
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
      selector:
        matchLabels:
          k8s-app: kube-dns
      template:
        metadata:
          labels:
            k8s-app: kube-dns
          annotations:
            seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
        spec:
          priorityClassName: system-cluster-critical
          serviceAccountName: coredns
          tolerations:
            - key: "CriticalAddonsOnly"
              operator: "Exists"
          nodeSelector:
            beta.kubernetes.io/os: linux
          nodeName: 10.0.0.13
          containers:
          - name: coredns
            image: coredns/coredns:1.3.1
            imagePullPolicy: IfNotPresent
            resources:
              limits:
                memory: 100Mi
              requests:
                cpu: 100m
                memory: 70Mi
            args: [ "-conf", "/etc/coredns/Corefile" ]
            volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
              readOnly: true
            - name: tmp
              mountPath: /tmp
            ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
            livenessProbe:
              httpGet:
                path: /health
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 60
              timeoutSeconds: 5
              successThreshold: 1
              failureThreshold: 5
            readinessProbe:
              httpGet:
                path: /health
                port: 8080
                scheme: HTTP
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                add:
                - NET_BIND_SERVICE
                drop:
                - all
              readOnlyRootFilesystem: true
          dnsPolicy: Default
          volumes:
            - name: tmp
              emptyDir: {}
            - name: config-volume
              configMap:
                name: coredns
                items:
                - key: Corefile
                  path: Corefile
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns
      namespace: kube-system
      annotations:
        prometheus.io/port: "9153"
        prometheus.io/scrape: "true"
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/name: "CoreDNS"
    spec:
      selector:
        k8s-app: kube-dns
      clusterIP: 10.254.230.254
      ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
        protocol: TCP
      - name: metrics
        port: 9153
        protocol: TCP
    EOF
    
    1. Create the resources on the master node (have the image coredns/coredns:1.3.1 ready)
    kubectl create -f coredns.yaml
    
    1. On the master node, check which service account the pod uses
    kubectl get pod -n kube-system
    kubectl get pod -n kube-system coredns-6cf5d7fdcf-dvp8r -o yaml | grep -i ServiceAccount
    
    1. On the master node, check the coredns service account's cluster role and binding
    kubectl get clusterrole | grep coredns
    kubectl get clusterrolebindings | grep coredns
    kubectl get sa -n kube-system | grep coredns
    
    1. On the master node, create the tomcat + mysql deploy resource yaml files
    mkdir -p /root/k8s_yaml/tomcat_deploy && cd /root/k8s_yaml/tomcat_deploy
    cat > /root/k8s_yaml/tomcat_deploy/mysql-deploy.yaml <<EOF
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      namespace: tomcat
      name: mysql
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
            - name: mysql
              image: mysql:5.7
              ports:
              - containerPort: 3306
              env:
              - name: MYSQL_ROOT_PASSWORD
                value: '123456'
    EOF
    cat > /root/k8s_yaml/tomcat_deploy/mysql-svc.yaml <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      namespace: tomcat
      name: mysql
    spec:
      ports:
        - port: 3306
          targetPort: 3306
      selector:
        app: mysql
    EOF
    cat > /root/k8s_yaml/tomcat_deploy/tomcat-deploy.yaml <<EOF
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      namespace: tomcat
      name: myweb
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: myweb
        spec:
          containers:
            - name: myweb
              image: kubeguide/tomcat-app:v2
              ports:
              - containerPort: 8080
              env:
              - name: MYSQL_SERVICE_HOST
                value: 'mysql'
              - name: MYSQL_SERVICE_PORT
                value: '3306'
    EOF
    cat > /root/k8s_yaml/tomcat_deploy/tomcat-svc.yaml <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      namespace: tomcat
      name: myweb
    spec:
      type: NodePort
      ports:
        - port: 8080
          nodePort: 30008
      selector:
        app: myweb
    EOF
    
    1. On the master node, create the resources (images required: mysql:5.7 and kubeguide/tomcat-app:v2)
    kubectl create namespace tomcat
    kubectl create -f .
    
    1. Verify on the master node (the name resolves to the mysql ClusterIP; the ICMP packet loss is expected, since a ClusterIP does not answer ping)
    [root@k8s-master tomcat_demo]# kubectl get pod -n tomcat
    NAME                     READY   STATUS    RESTARTS   AGE
    mysql-94f6bbcfd-6nng8    1/1     Running   0          5s
    myweb-5c8956ff96-fnhjh   1/1     Running   0          5s
    [root@k8s-master tomcat_deploy]# kubectl -n tomcat exec -ti myweb-5c8956ff96-fnhjh /bin/bash
    root@myweb-5c8956ff96-fnhjh:/usr/local/tomcat# ping mysql
    PING mysql.tomcat.svc.cluster.local (10.254.94.77): 56 data bytes
    ^C--- mysql.tomcat.svc.cluster.local ping statistics ---
    2 packets transmitted, 0 packets received, 100% packet loss
    root@myweb-5c8956ff96-fnhjh:/usr/local/tomcat# exit
    exit
    
    1. Verify DNS
    • On the master node
    [root@k8s-master deploy]# kubectl get pod -n kube-system -o wide
    NAME                       READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
    coredns-6cf5d7fdcf-dvp8r   1/1     Running   0          177m   172.18.98.2   10.0.0.13   <none>           <none>
    
    yum install bind-utils -y
    dig @172.18.98.2 kubernetes.default.svc.cluster.local +short
    
    • On a node, via the cluster DNS service IP (kube-proxy); an optional in-pod check follows
    yum install bind-utils -y
    dig @10.254.230.254 kubernetes.default.svc.cluster.local +short
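
    As an additional check (a sketch, not part of the original steps; it assumes the busybox:1.28 image can be pulled on the nodes), DNS can also be tested from inside a pod, which exercises the kubelet --cluster-dns setting directly:
    # run a throwaway pod and resolve the apiserver service name through CoreDNS
    kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default.svc.cluster.local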
    

    Deploy the dashboard service

    1. The official configuration file, with minor modifications

    For k8s 1.15, it is recommended to use kubernetes-dashboard.yaml from dashboard 1.10.1 rather than the old dashboard-controller.yaml.

    mkdir -p /root/k8s_yaml/dashboard && cd /root/k8s_yaml/dashboard
    cat > /root/k8s_yaml/dashboard/kubernetes-dashboard.yaml <<EOF
    # Copyright 2017 The Kubernetes Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    # ------------------- Dashboard Secret ------------------- #
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-certs
      namespace: kube-system
    type: Opaque
    
    ---
    # ------------------- Dashboard Service Account ------------------- #
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kube-system
    
    ---
    # ------------------- Dashboard Role & Role Binding ------------------- #
    
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: kubernetes-dashboard-minimal
      namespace: kube-system
    rules:
      # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["create"]
      # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
    - apiGroups: [""]
      resources: ["configmaps"]
      verbs: ["create"]
      # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
    - apiGroups: [""]
      resources: ["secrets"]
      resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
      verbs: ["get", "update", "delete"]
      # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
    - apiGroups: [""]
      resources: ["configmaps"]
      resourceNames: ["kubernetes-dashboard-settings"]
      verbs: ["get", "update"]
      # Allow Dashboard to get metrics from heapster.
    - apiGroups: [""]
      resources: ["services"]
      resourceNames: ["heapster"]
      verbs: ["proxy"]
    - apiGroups: [""]
      resources: ["services/proxy"]
      resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
      verbs: ["get"]
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubernetes-dashboard-minimal
      namespace: kube-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubernetes-dashboard-minimal
    subjects:
    - kind: ServiceAccount
      name: kubernetes-dashboard
      namespace: kube-system
    
    ---
    # ------------------- Dashboard Deployment ------------------- #
    
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kube-system
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: kubernetes-dashboard
      template:
        metadata:
          labels:
            k8s-app: kubernetes-dashboard
        spec:
          containers:
          - name: kubernetes-dashboard
            image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
            ports:
            - containerPort: 8443
              protocol: TCP
            args:
              - --auto-generate-certificates
              # Uncomment the following line to manually specify Kubernetes API server Host
              # If not specified, Dashboard will attempt to auto discover the API server and connect
              # to it. Uncomment only if the default does not work.
              # - --apiserver-host=http://my-address:port
            volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
            livenessProbe:
              httpGet:
                scheme: HTTPS
                path: /
                port: 8443
              initialDelaySeconds: 30
              timeoutSeconds: 30
          volumes:
          - name: kubernetes-dashboard-certs
            secret:
              secretName: kubernetes-dashboard-certs
          - name: tmp-volume
            emptyDir: {}
          serviceAccountName: kubernetes-dashboard
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
    
    ---
    # ------------------- Dashboard Service ------------------- #
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kube-system
    spec:
      type: NodePort
      ports:
        - port: 443
          nodePort: 30001
          targetPort: 8443
      selector:
        k8s-app: kubernetes-dashboard
    EOF
    
    # switch the image to a domestic mirror
    image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
    # change the Service type to NodePort and pin the host port
    spec:
    type: NodePort
    ports:
        - port: 443
          nodePort: 30001
          targetPort: 8443
    
    1. Create the resources (image required: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1)
    kubectl create -f kubernetes-dashboard.yaml
    
    1. Check the admin cluster roles that already exist
    kubectl get clusterrole | grep admin
    
    1. Create a ServiceAccount and bind it to the existing cluster-admin role (the default dashboard account only has minimal permissions)
    cat > /root/k8s_yaml/dashboard/dashboard_rbac.yaml <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        addonmanager.kubernetes.io/mode: Reconcile
      name: kubernetes-admin
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-dashboard-admin
      namespace: kube-system
      labels:
        k8s-app: kubernetes-dashboard
        addonmanager.kubernetes.io/mode: Reconcile
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: kubernetes-admin
      namespace: kube-system
    EOF
    
    1. Create the resources
    kubectl create -f dashboard_rbac.yaml
    
    1. Look at the token of the admin ServiceAccount (a way to look up the secret name follows the output)
    [root@k8s-master dashboard]# kubectl describe secrets -n kube-system kubernetes-admin-token-tpqs6 
    Name:         kubernetes-admin-token-tpqs6
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name: kubernetes-admin
                  kubernetes.io/service-account.uid: 17f1f684-588a-4639-8ec6-a39c02361d0e
    
    Type:  kubernetes.io/service-account-token
    
    Data
    ====
    ca.crt:     1354 bytes
    namespace:  11 bytes
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWFkbWluLXRva2VuLXRwcXM2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imt1YmVybmV0ZXMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxN2YxZjY4NC01ODhhLTQ2MzktOGVjNi1hMzljMDIzNjFkMGUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZXJuZXRlcy1hZG1pbiJ9.JMvv-W50Zala4I0uxe488qjzDZ2m05KN0HMX-RCHFg87jHq49JGyqQJQDFgujKCyecAQSYRFm4uZWnKiWR81Xd7IZr16pu5exMpFaAryNDeAgTAsvpJhaAuumopjiXXYgip-7pNKxJSthmboQkQ4OOmzSHRv7N6vOsyDQOhwGcgZ01862dsjowP3cCPL6GSQCeXT0TX968MyeKZ-2JV4I2XdbkPoZYCRNvwf9F3u74xxPlC9vVLYWdNP8rXRBXi3W_DdQyXntN-jtMXHaN47TWuqKIgyWmT3ZzTIKhKART9_7YeiOAA6LVGtYq3kOvPqyGHvQulx6W2ADjCTAAPovA
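
    The secret name suffix (tpqs6 above) is generated randomly, so it will differ on your cluster; one way to look it up (not part of the original steps) is:
    kubectl -n kube-system get secrets | grep kubernetes-admin-token
    # or read it straight from the ServiceAccount object
    kubectl -n kube-system get sa kubernetes-admin -o jsonpath='{.secrets[0].name}'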
    
    1. Visit https://10.0.0.12:30001 with Firefox and log in with the token
    2. Generate a certificate to fix Chrome refusing to open the kubernetes dashboard
    mkdir /root/k8s_yaml/dashboard/key && cd /root/k8s_yaml/dashboard/key
    openssl genrsa -out dashboard.key 2048
    openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=10.0.0.11'
    openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
    
    1. Delete the original certificate secret
    kubectl delete secret kubernetes-dashboard-certs -n kube-system
    
    1. Create a new certificate secret
    kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kube-system
    
    1. Delete the pod; a new pod is created automatically and picks up the new certificate
    [root@k8s-master key]# kubectl get pod -n kube-system 
    NAME                                    READY   STATUS    RESTARTS   AGE
    coredns-6cf5d7fdcf-dvp8r                1/1     Running   0          4h19m
    kubernetes-dashboard-5dc4c54b55-sn8sv   1/1     Running   0          41m
    
    kubectl delete pod -n kube-system kubernetes-dashboard-5dc4c54b55-sn8sv
    
    1. Visit https://10.0.0.12:30001 with Chrome and log in with the token
    2. Turn the token into a kubeconfig to work around the token login expiring quickly (an optional sanity check follows the commands)
    DASH_TOKEN='eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWFkbWluLXRva2VuLXRwcXM2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imt1YmVybmV0ZXMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxN2YxZjY4NC01ODhhLTQ2MzktOGVjNi1hMzljMDIzNjFkMGUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZXJuZXRlcy1hZG1pbiJ9.JMvv-W50Zala4I0uxe488qjzDZ2m05KN0HMX-RCHFg87jHq49JGyqQJQDFgujKCyecAQSYRFm4uZWnKiWR81Xd7IZr16pu5exMpFaAryNDeAgTAsvpJhaAuumopjiXXYgip-7pNKxJSthmboQkQ4OOmzSHRv7N6vOsyDQOhwGcgZ01862dsjowP3cCPL6GSQCeXT0TX968MyeKZ-2JV4I2XdbkPoZYCRNvwf9F3u74xxPlC9vVLYWdNP8rXRBXi3W_DdQyXntN-jtMXHaN47TWuqKIgyWmT3ZzTIKhKART9_7YeiOAA6LVGtYq3kOvPqyGHvQulx6W2ADjCTAAPovA'
    kubectl config set-cluster kubernetes --server=https://10.0.0.11:6443 --kubeconfig=/root/dashbord-admin.conf
    kubectl config set-credentials admin --token=$DASH_TOKEN --kubeconfig=/root/dashbord-admin.conf
    kubectl config set-context admin --cluster=kubernetes --user=admin --kubeconfig=/root/dashbord-admin.conf
    kubectl config use-context admin --kubeconfig=/root/dashbord-admin.conf
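
    Optionally, verify that the generated kubeconfig actually works before handing it out (a quick check, not in the original steps; the token is bound to cluster-admin, so a simple read should succeed):
    kubectl --kubeconfig=/root/dashbord-admin.conf get nodes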
    
    1. Download it to your own machine for future logins
    cd ~
    sz dashbord-admin.conf
    
    1. Visit https://10.0.0.12:30001 with Chrome and log in with the kubeconfig file; exec into containers now works

    Networking

    Mapping external services (Endpoints resource)

    1. On the master node, list the endpoints resources
    [root@k8s-master ~]# kubectl get endpoints 
    NAME         ENDPOINTS        AGE
    kubernetes   10.0.0.11:6443   28h
    ... ...
    

    Endpoints can be used to map an external service into the cluster. Every Service automatically gets an associated Endpoints object: by label selector if one is set, otherwise by matching name.

    1. Prepare an external database on k8s-node2
    yum install mariadb-server -y
    systemctl start mariadb
    mysql_secure_installation
    
    n
    y
    y
    y
    y
    mysql -e "grant all on *.* to root@'%' identified by '123456';"
    

    The demo project hard-codes the database connection in tomcat's index.html page: user root, password 123456.
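
    Before wiring it into the cluster, you can confirm the external database is reachable with the grant above (a quick check, assuming the mariadb client is installed on the node you test from):
    mysql -h 10.0.0.13 -uroot -p123456 -e 'show databases;'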

    1. On the master node, create the yaml file for the Endpoints and Service resources
    cd /root/k8s_yaml/tomcat_deploy
    cat > /root/k8s_yaml/tomcat_deploy/mysql_endpoint_svc.yaml <<EOF
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: mysql
      namespace: tomcat
    subsets:
    - addresses:
      - ip: 10.0.0.13
      ports:
      - name: mysql
        port: 3306
        protocol: TCP
    --- 
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
      namespace: tomcat
    spec:
      ports:
      - name: mysql
        port: 3306
        protocol: TCP
        targetPort: 3306  
      type: ClusterIP
    EOF
    
    # the system-generated defaults can be used as a reference
    kubectl get endpoints kubernetes -o yaml
    kubectl get svc kubernetes -o yaml
    

    Note: the Service must not use a label selector here, otherwise the endpoints controller would manage (and overwrite) the manually defined Endpoints.

    1. On the master node, create the resources
    kubectl delete deployment mysql -n tomcat
    kubectl delete svc mysql -n tomcat
    kubectl create -f mysql_endpoint_svc.yaml
    
    1. On the master node, check the endpoints resource and its association with the Service
    kubectl get endpoints -n tomcat
    kubectl describe svc -n tomcat
    
    1. Open http://10.0.0.12:30008/demo/ in a browser

    2. On k8s-node2, check the database to verify

    [root@k8s-node2 ~]# mysql -e 'show databases;'
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | HPE_APP            |
    | mysql              |
    | performance_schema |
    +--------------------+
    [root@k8s-node2 ~]# mysql -e 'use HPE_APP;select * from T_USERS;'
    +----+-----------+-------+
    | ID | USER_NAME | LEVEL |
    +----+-----------+-------+
    |  1 | me        | 100   |
    |  2 | our team  | 100   |
    |  3 | HPE       | 100   |
    |  4 | teacher   | 100   |
    |  5 | docker    | 100   |
    |  6 | google    | 100   |
    +----+-----------+-------+
    

    kube-proxy in ipvs mode

    1. Install the required tools on the node nodes
    yum install ipvsadm conntrack-tools -y
    
    1. On each node, edit kube-proxy.service to add the parameter (keep --hostname-override set to that node's own IP)
    cat > /usr/lib/systemd/system/kube-proxy.service <<EOF
    [Unit]
    Description=Kubernetes Proxy
    After=network.target
    [Service]
    ExecStart=/usr/bin/kube-proxy \
      --kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \
      --cluster-cidr 172.18.0.0/16 \
      --hostname-override 10.0.0.12 \
      --proxy-mode ipvs \
      --logtostderr=false \
      --v=2
    Restart=on-failure
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target
    EOF
    
    --proxy-mode ipvs  # enable ipvs mode
    

    IPVS (LVS) runs in NAT mode by default. If the IPVS prerequisites are not met (missing kernel modules or ipvsadm), kube-proxy automatically falls back to iptables mode.
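
    To make sure that fallback does not happen silently, the required kernel modules can be checked and loaded up front (a sketch; on kernels 4.19+ nf_conntrack_ipv4 is replaced by nf_conntrack):
    # load the IPVS-related modules, then confirm they are present
    for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done
    lsmod | grep -e ip_vs -e nf_conntrack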

    1. Restart kube-proxy on the nodes and check the LVS rules
    systemctl daemon-reload 
    systemctl restart kube-proxy.service 
    ipvsadm -L -n 
    

    Layer-7 load balancing (ingress-traefik)

    Ingress involves two components: the Ingress Controller and the Ingress resource.

    • ingress-controller (traefik): the controller component; it uses the host network directly.
    • An Ingress resource defines the forwarding rules that route requests to a given Service based on DNS name (host) or URL path.

    image-20201215232645950


    Ingress-Traefik

    Traefik is an open-source reverse proxy and load balancer. Its biggest advantage is that it integrates directly with common microservice platforms and configures itself dynamically. It currently supports backends such as Docker, Swarm, Mesos/Marathon, Kubernetes, Consul, Etcd, Zookeeper, BoltDB and a REST API.

    Traefik observability options

    1568743448535


    Create the RBAC objects

    1. Create the RBAC yaml file
    mkdir -p /root/k8s_yaml/ingress && cd /root/k8s_yaml/ingress
    cat > /root/k8s_yaml/ingress/ingress_rbac.yaml <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: traefik-ingress-controller
      namespace: kube-system
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: traefik-ingress-controller
    rules:
      - apiGroups:
          - ""
        resources:
          - services
          - endpoints
          - secrets
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - extensions
        resources:
          - ingresses
        verbs:
          - get
          - list
          - watch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: traefik-ingress-controller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: traefik-ingress-controller
    subjects:
    - kind: ServiceAccount
      name: traefik-ingress-controller
      namespace: kube-system
    EOF
    
    1. Create the resources
    kubectl create -f ingress_rbac.yaml
    
    1. Check the resources
    kubectl get serviceaccounts -n kube-system | grep traefik-ingress-controller
    kubectl get clusterrole -n kube-system | grep traefik-ingress-controller
    kubectl get clusterrolebindings.rbac.authorization.k8s.io -n kube-system | grep traefik-ingress-controller
    

    Deploy the traefik service

    1. Create the DaemonSet yaml file for traefik
    cat > /root/k8s_yaml/ingress/ingress_traefik.yaml <<EOF
    kind: DaemonSet
    apiVersion: extensions/v1beta1
    metadata:
      name: traefik-ingress-controller
      namespace: kube-system
      labels:
        k8s-app: traefik-ingress-lb
    spec:
      selector:
        matchLabels:
          k8s-app: traefik-ingress-lb
      template:
        metadata:
          labels:
            k8s-app: traefik-ingress-lb
            name: traefik-ingress-lb
        spec:
          serviceAccountName: traefik-ingress-controller
          terminationGracePeriodSeconds: 60
          tolerations:
          - operator: "Exists"
          #nodeSelector:
            #kubernetes.io/hostname: master
          # allow host networking; hostPort pins the host ports
          hostNetwork: true
          containers:
          - image: traefik:v1.7.2
            imagePullPolicy: IfNotPresent
            name: traefik-ingress-lb
            ports:
            - name: http
              containerPort: 80
              hostPort: 80
            - name: admin
              containerPort: 8080
              hostPort: 8080
            args:
            - --api
            - --kubernetes
            - --logLevel=DEBUG
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: traefik-ingress-service
      namespace: kube-system
    spec:
      selector:
        k8s-app: traefik-ingress-lb
      ports:
        - protocol: TCP
          port: 80
          name: web
        - protocol: TCP
          port: 8080
          name: admin
      type: NodePort
    EOF
    
    1. Create the resources (image required: traefik:v1.7.2)
    kubectl create -f ingress_traefik.yaml
    
    1. Open the traefik dashboard at http://10.0.0.12:8080; at this point there are no backend servers listed.

    Create the Ingress resource

    1. Look up the NAME and PORT of the Service to be proxied
    [root@k8s-master ingress]# kubectl get svc -n tomcat 
    NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
    mysql   ClusterIP   10.254.71.221    <none>        3306/TCP         4h2m
    myweb   NodePort    10.254.130.141   <none>        8080:30008/TCP   8h
    
    1. Create the Ingress yaml file
    cat > /root/k8s_yaml/ingress/ingress.yaml <<EOF
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: traefik-myweb
      namespace: tomcat
      annotations:
        kubernetes.io/ingress.class: traefik
    spec:
      rules:
      - host: tomcat.oldqiang.com
        http:
          paths:
          - backend:
              serviceName: myweb
              servicePort: 8080
    EOF
    
    1. Create the resource
    kubectl create -f ingress.yaml
    
    1. Check the resource
    kubectl get ingress -n tomcat
    

    Test access

    1. On Windows, add 10.0.0.12 tomcat.oldqiang.com to C:\Windows\System32\drivers\etc\hosts (an alternative curl-based check appears after the screenshots)

    2. Open tomcat directly in a browser: http://tomcat.oldqiang.com/demo/

    image-20201215205523416

    1. Open http://10.0.0.12:8080 again; the BACKENDS section now lists servers

    image-20201215205417151

    image-20201215205446740
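
    If you do not want to touch the Windows hosts file, the same Ingress rule can also be exercised from any Linux host with curl by overriding the Host header (an alternative check, not in the original steps; traefik listens on hostPort 80 of the node):
    curl -H 'Host: tomcat.oldqiang.com' http://10.0.0.12/demo/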


    Layer-7 load balancing (ingress-nginx)

    img

    Six base yaml files:

    • Namespace
    • ConfigMap
    • RBAC
    • Service: add NodePort ports
    • Deployment: the default 404 backend, switched to a domestic Aliyun image
    • Deployment: the ingress-controller, switched to a domestic Aliyun image
    1. Prepare the configuration files
    mkdir /root/k8s_yaml/ingress-nginx && cd /root/k8s_yaml/ingress-nginx
    # create the ingress-nginx namespace
    cat > /root/k8s_yaml/ingress-nginx/namespace.yaml <<EOF
    apiVersion: v1
    kind: Namespace
    metadata:
      name: ingress-nginx
    EOF
    # create the ConfigMap
    cat > /root/k8s_yaml/ingress-nginx/configmap.yaml <<EOF
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: nginx-configuration
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    EOF
    # requests for hostnames that match no rule are forwarded to the default-http-backend Service, which simply returns 404:
    cat > /root/k8s_yaml/ingress-nginx/default-backend.yaml <<EOF
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: default-http-backend
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
      namespace: ingress-nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/name: default-http-backend
          app.kubernetes.io/part-of: ingress-nginx
      template:
        metadata:
          labels:
            app.kubernetes.io/name: default-http-backend
            app.kubernetes.io/part-of: ingress-nginx
        spec:
          terminationGracePeriodSeconds: 60
          containers:
            - name: default-http-backend
              # Any image is permissible as long as:
              # 1. It serves a 404 page at /
              # 2. It serves 200 on a /healthz endpoint
              # switched to a domestic Aliyun image
              image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
              livenessProbe:
                httpGet:
                  path: /healthz
                  port: 8080
                  scheme: HTTP
                initialDelaySeconds: 30
                timeoutSeconds: 5
              ports:
                - containerPort: 8080
              resources:
                limits:
                  cpu: 10m
                  memory: 20Mi
                requests:
                  cpu: 10m
                  memory: 20Mi
    
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: default-http-backend
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      ports:
        - port: 80
          targetPort: 8080
      selector:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    EOF
    # create the RBAC objects for ingress, including:
    # ServiceAccount, ClusterRole, Role, RoleBinding, ClusterRoleBinding
    cat > /root/k8s_yaml/ingress-nginx/rbac.yaml <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nginx-ingress-serviceaccount
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRole
    metadata:
      name: nginx-ingress-clusterrole
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    rules:
      - apiGroups:
          - ""
        resources:
          - configmaps
          - endpoints
          - nodes
          - pods
          - secrets
        verbs:
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - nodes
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - services
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - "extensions"
        resources:
          - ingresses
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - events
        verbs:
          - create
          - patch
      - apiGroups:
          - "extensions"
        resources:
          - ingresses/status
        verbs:
          - update
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: Role
    metadata:
      name: nginx-ingress-role
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    rules:
      - apiGroups:
          - ""
        resources:
          - configmaps
          - pods
          - secrets
          - namespaces
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - configmaps
        resourceNames:
          # Defaults to "<election-id>-<ingress-class>"
          # Here: "<ingress-controller-leader>-<nginx>"
          # This has to be adapted if you change either parameter
          # when launching the nginx-ingress-controller.
          - "ingress-controller-leader-nginx"
        verbs:
          - get
          - update
      - apiGroups:
          - ""
        resources:
          - configmaps
        verbs:
          - create
      - apiGroups:
          - ""
        resources:
          - endpoints
        verbs:
          - get
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: RoleBinding
    metadata:
      name: nginx-ingress-role-nisa-binding
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: nginx-ingress-role
    subjects:
      - kind: ServiceAccount
        name: nginx-ingress-serviceaccount
        namespace: ingress-nginx
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: nginx-ingress-clusterrole-nisa-binding
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: nginx-ingress-clusterrole
    subjects:
      - kind: ServiceAccount
        name: nginx-ingress-serviceaccount
        namespace: ingress-nginx
    EOF
    # create the ingress-controller, which turns newly added Ingress resources into nginx configuration.
    cat > /root/k8s_yaml/ingress-nginx/with-rbac.yaml <<'EOF'  # quote EOF so $(POD_NAMESPACE) below is kept literally instead of being expanded by the shell
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx-ingress-controller
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
          app.kubernetes.io/part-of: ingress-nginx
      template:
        metadata:
          labels:
            app.kubernetes.io/name: ingress-nginx
            app.kubernetes.io/part-of: ingress-nginx
          annotations:
            prometheus.io/port: "10254"
            prometheus.io/scrape: "true"
        spec:
          serviceAccountName: nginx-ingress-serviceaccount
          containers:
            - name: nginx-ingress-controller
              # switched to a domestic Aliyun image
              image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
              args:
                - /nginx-ingress-controller
                - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
                - --configmap=$(POD_NAMESPACE)/nginx-configuration
                - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
                - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
                - --publish-service=$(POD_NAMESPACE)/ingress-nginx
                - --annotations-prefix=nginx.ingress.kubernetes.io
              securityContext:
                capabilities:
                  drop:
                    - ALL
                  add:
                    - NET_BIND_SERVICE
                # www-data -> 33
                runAsUser: 33
              env:
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
              ports:
                - name: http
                  containerPort: 80
                - name: https
                  containerPort: 443
              livenessProbe:
                failureThreshold: 3
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 1
              readinessProbe:
                failureThreshold: 3
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 1
    EOF
    # create the Service that exposes the controller externally
    cat > /root/k8s_yaml/ingress-nginx/service-nodeport.yaml <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      type: NodePort
      ports:
        - name: http
          port: 80
          targetPort: 80
          protocol: TCP
          nodePort: 32080  # http
        - name: https
          port: 443
          targetPort: 443
          protocol: TCP
          nodePort: 32443  # https
      selector:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    EOF
    
    1. Pull the images on all node nodes
    docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
    docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
    docker images
    
    1. Create the resources
    kubectl create -f namespace.yaml
    kubectl create -f configmap.yaml
    kubectl create -f rbac.yaml
    kubectl create -f default-backend.yaml
    kubectl create -f with-rbac.yaml
    kubectl create -f service-nodeport.yaml
    
    1. Check the status of the ingress-nginx components
    kubectl get all -n ingress-nginx
    
    1. Access http://10.0.0.12:32080/
    [root@k8s-master ingress-nginx]# curl 10.0.0.12:32080
    default backend - 404
    
    1. Prepare a backend Service and Deployment (nginx)
    cat > /root/k8s_yaml/ingress-nginx/deploy-demon.yaml <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-nginx
    spec:
      selector:
        app: myapp-nginx
        release: canary
      ports:
      - name: http
        port: 80
        targetPort: 80
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata: 
      name: nginx-deploy
    spec:
      replicas: 2
      selector: 
        matchLabels:
          app: myapp-nginx
          release: canary
      template:
        metadata:
          labels:
            app: myapp-nginx
            release: canary
        spec:
          containers:
          - name: myapp-nginx
            image: nginx:1.13
            ports:
            - name: httpd
              containerPort: 80
    EOF
    
    1. Create the resources (image required: nginx:1.13)
    kubectl apply -f deploy-demon.yaml
    
    1. Check the resources
    kubectl get all
    
    1. Create the Ingress resource: expose the nginx Service through ingress-nginx
    cat > /root/k8s_yaml/ingress-nginx/ingress-myapp.yaml <<EOF
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-myapp
      annotations: 
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - host: myapp.oldqiang.com
        http:
          paths:
          - path: 
            backend:
              serviceName: myapp-nginx
              servicePort: 80
    EOF
    
    1. Create the resource
    kubectl apply -f ingress-myapp.yaml
    
    1. Check the resource
    kubectl get ingresses
    
    1. On Windows, add 10.0.0.12 myapp.oldqiang.com to C:\Windows\System32\drivers\etc\hosts
    2. Open http://myapp.oldqiang.com:32080/ in a browser; the nginx welcome page is shown
    3. Change the nginx pages so the two replicas can be told apart
    [root@k8s-master ingress-nginx]# kubectl get pod
    NAME                           READY   STATUS    RESTARTS   AGE
    nginx-deploy-6b4c84588-crgvr   1/1     Running   0          22m
    nginx-deploy-6b4c84588-krvwz   1/1     Running   0          22m
    
    kubectl exec -ti nginx-deploy-6b4c84588-crgvr /bin/bash
    echo web1 > /usr/share/nginx/html/index.html
    exit
    
    kubectl exec -ti nginx-deploy-6b4c84588-krvwz /bin/bash
    echo web2 > /usr/share/nginx/html/index.html
    exit
    
    1. Open http://myapp.oldqiang.com:32080/ and refresh to see the load balancing alternate between web1 and web2

    image-20201215225826142


    Autoscaling

    Heapster monitoring

    Based on the official heapster 1.5.4 configuration files

    1. Check the existing default heapster cluster role
    kubectl get clusterrole | grep heapster
    
    1. Create the yaml file with the RBAC, Service and Deployment objects needed by heapster
    mkdir /root/k8s_yaml/heapster/ && cd /root/k8s_yaml/heapster/
    cat > /root/k8s_yaml/heapster/heapster.yaml <<EOF
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: heapster
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:heapster
    subjects:
    - kind: ServiceAccount
      name: heapster
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: heapster
      namespace: kube-system
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: heapster
      namespace: kube-system
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            task: monitoring
            k8s-app: heapster
        spec:
          serviceAccountName: heapster
          containers:
          - name: heapster
            image: registry.aliyuncs.com/google_containers/heapster-amd64:v1.5.3
            imagePullPolicy: IfNotPresent
            command:
            - /heapster
            - --source=kubernetes:https://kubernetes.default
            - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        task: monitoring
        # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
        # If you are NOT using this as an addon, you should comment out this line.
        kubernetes.io/cluster-service: 'true'
        kubernetes.io/name: Heapster
      name: heapster
      namespace: kube-system
    spec:
      ports:
      - port: 80
        targetPort: 8082
      selector:
        k8s-app: heapster
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: monitoring-grafana
      namespace: kube-system
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            task: monitoring
            k8s-app: grafana
        spec:
          containers:
          - name: grafana
            image: registry.aliyuncs.com/google_containers/heapster-grafana-amd64:v4.4.3
            ports:
            - containerPort: 3000
              protocol: TCP
            volumeMounts:
            - mountPath: /etc/ssl/certs
              name: ca-certificates
              readOnly: true
            - mountPath: /var
              name: grafana-storage
            env:
            - name: INFLUXDB_HOST
              value: monitoring-influxdb
            - name: GF_SERVER_HTTP_PORT
              value: "3000"
              # The following env variables are required to make Grafana accessible via
              # the kubernetes api-server proxy. On production clusters, we recommend
              # removing these env variables, setup auth for grafana, and expose the grafana
              # service using a LoadBalancer or a public IP.
            - name: GF_AUTH_BASIC_ENABLED
              value: "false"
            - name: GF_AUTH_ANONYMOUS_ENABLED
              value: "true"
            - name: GF_AUTH_ANONYMOUS_ORG_ROLE
              value: Admin
            - name: GF_SERVER_ROOT_URL
              # If you're only using the API Server proxy, set this value instead:
              # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
              value: /
          volumes:
          - name: ca-certificates
            hostPath:
              path: /etc/ssl/certs
          - name: grafana-storage
            emptyDir: {}
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
        # If you are NOT using this as an addon, you should comment out this line.
        kubernetes.io/cluster-service: 'true'
        kubernetes.io/name: monitoring-grafana
      name: monitoring-grafana
      namespace: kube-system
    spec:
      # In a production setup, we recommend accessing Grafana through an external Loadbalancer
      # or through a public IP.
      # type: LoadBalancer
      # You could also use NodePort to expose the service at a randomly-generated port
      # type: NodePort
      ports:
      - port: 80
        targetPort: 3000
      selector:
        k8s-app: grafana
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: monitoring-influxdb
      namespace: kube-system
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            task: monitoring
            k8s-app: influxdb
        spec:
          containers:
          - name: influxdb
            image: registry.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.3.3
            volumeMounts:
            - mountPath: /data
              name: influxdb-storage
          volumes:
          - name: influxdb-storage
            emptyDir: {}
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        task: monitoring
        # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
        # If you are NOT using this as an addon, you should comment out this line.
        kubernetes.io/cluster-service: 'true'
        kubernetes.io/name: monitoring-influxdb
      name: monitoring-influxdb
      namespace: kube-system
    spec:
      ports:
      - port: 8086
        targetPort: 8086
      selector:
        k8s-app: influxdb
    EOF
    
    1. Create the resources
    kubectl create -f heapster.yaml
    
    1. Newer Kubernetes releases no longer recommend heapster-based autoscaling; force-enable it in kube-controller-manager (reload systemd afterwards, see below):
    kube-controller-manager 
    --horizontal-pod-autoscaler-use-rest-clients=false
    
    sed -i '8a   --horizontal-pod-autoscaler-use-rest-clients=false \' /usr/lib/systemd/system/kube-controller-manager.service
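
    After editing the unit file, systemd needs to pick up the change (assuming the unit path used above):
    systemctl daemon-reload
    systemctl restart kube-controller-manager.service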
    
    1. Create a workload to scale
    cd /root/k8s_yaml/deploy
    cat > /root/k8s_yaml/deploy/k8s_deploy3.yaml <<EOF
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.13
            ports:
            - containerPort: 80
            resources:
              limits:
                cpu: 100m
                memory: 50Mi
              requests:
                cpu: 100m
                memory: 50Mi
    EOF
    kubectl create -f k8s_deploy3.yaml
    
    1. Create the HPA rule
    kubectl autoscale deploy nginx --max=6 --min=1 --cpu-percent=5
    
    1. Check the resources; a quick way to generate test load follows below
    kubectl get pod
    kubectl get hpa
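
    To see the autoscaler react, some CPU load can be pushed at the nginx pods (a rough sketch, not in the original steps; POD_IP is a hypothetical placeholder for an IP taken from `kubectl get pod -o wide`):
    # run a busy loop of requests against one nginx pod
    kubectl run load-gen --image=busybox:1.28 --restart=Never -- /bin/sh -c 'while true; do wget -q -O- http://POD_IP >/dev/null; done'
    kubectl get hpa -w            # watch REPLICAS climb, Ctrl-C to stop watching
    kubectl delete pod load-gen   # stop the load so the deployment scales back down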
    
    1. Remove the heapster resources; they are not compatible with metrics-server
    kubectl delete -f heapster.yaml
    kubectl delete hpa nginx
    # revert the kube-controller-manager.service change
    
    1. When a node is NotReady, its pods can be force-deleted
    kubectl delete -n kube-system pod Pod_Name --force --grace-period 0
    

    metrics-server

    metrics-server Github 1.15

    1. Prepare the yaml files; use domestic mirror addresses for the two images and adjust a few other parameters
    mkdir -p /root/k8s_yaml/metrics/ && cd /root/k8s_yaml/metrics/
    cat <<EOF > /root/k8s_yaml/metrics/auth-rbac.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: metrics-server-auth-reader
      namespace: kube-system
      labels:
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: extension-apiserver-authentication-reader
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: metrics-server:system:auth-delegator
      labels:
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:auth-delegator
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: system:metrics-server
      labels:
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - "extensions"
      resources:
      - deployments
      verbs:
      - get
      - list
      - update
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: system:metrics-server
      labels:
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:metrics-server
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    EOF
    cat <<EOF > /root/k8s_yaml/metrics/metrics-apiservice.yaml
    apiVersion: apiregistration.k8s.io/v1beta1
    kind: APIService
    metadata:
      name: v1beta1.metrics.k8s.io
      labels:
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    spec:
      service:
        name: metrics-server
        namespace: kube-system
      group: metrics.k8s.io
      version: v1beta1
      insecureSkipTLSVerify: true
      groupPriorityMinimum: 100
      versionPriority: 100
    EOF
    cat <<EOF > /root/k8s_yaml/metrics/metrics-server.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: metrics-server
      namespace: kube-system
      labels:
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: metrics-server-config
      namespace: kube-system
      labels:
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: EnsureExists
    data:
      NannyConfiguration: |-
        apiVersion: nannyconfig/v1alpha1
        kind: NannyConfiguration
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: metrics-server-v0.3.3
      namespace: kube-system
      labels:
        k8s-app: metrics-server
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
        version: v0.3.3
    spec:
      selector:
        matchLabels:
          k8s-app: metrics-server
          version: v0.3.3
      template:
        metadata:
          name: metrics-server
          labels:
            k8s-app: metrics-server
            version: v0.3.3
          annotations:
            scheduler.alpha.kubernetes.io/critical-pod: ''
            seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
        spec:
          priorityClassName: system-cluster-critical
          serviceAccountName: metrics-server
          containers:
          - name: metrics-server
            image: registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.3
            command:
            - /metrics-server
            - --metric-resolution=30s
            # These are needed for GKE, which doesn't support secure communication yet.
            # Remove these lines for non-GKE clusters, and when GKE supports token-based auth.
            #- --kubelet-port=10255
            #- --deprecated-kubelet-completely-insecure=true
            - --kubelet-insecure-tls
            - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
            ports:
            - containerPort: 443
              name: https
              protocol: TCP
          - name: metrics-server-nanny
            image: registry.aliyuncs.com/google_containers/addon-resizer:1.8.5
            resources:
              limits:
                cpu: 100m
                memory: 300Mi
              requests:
                cpu: 5m
                memory: 50Mi
            env:
              - name: MY_POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: MY_POD_NAMESPACE
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.namespace
            volumeMounts:
            - name: metrics-server-config-volume
              mountPath: /etc/config
            command:
              - /pod_nanny
              - --config-dir=/etc/config
              #- --cpu=80m
              - --extra-cpu=0.5m
              #- --memory=80Mi
              #- --extra-memory=8Mi
              - --threshold=5
              - --deployment=metrics-server-v0.3.3
              - --container=metrics-server
              - --poll-period=300000
              - --estimator=exponential
              - --minClusterSize=2
              # Specifies the smallest cluster (defined in number of nodes)
              # resources will be scaled to.
              #- --minClusterSize={{ metrics_server_min_cluster_size }}
          volumes:
            - name: metrics-server-config-volume
              configMap:
                name: metrics-server-config
          tolerations:
            - key: "CriticalAddonsOnly"
              operator: "Exists"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: metrics-server
      namespace: kube-system
      labels:
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/cluster-service: "true"
        kubernetes.io/name: "Metrics-server"
    spec:
      selector:
        k8s-app: metrics-server
      ports:
      - port: 443
        protocol: TCP
        targetPort: https
    EOF
    

    To download the upstream configuration files:

    for file in auth-delegator.yaml auth-reader.yaml metrics-apiservice.yaml metrics-server-deployment.yaml metrics-server-service.yaml resource-reader.yaml;do wget https://raw.githubusercontent.com/kubernetes/kubernetes/v1.15.0/cluster/addons/metrics-server/$file;done
    
    # use a domestic mirror image
    image: registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.3
    command:
            - /metrics-server
            - --metric-resolution=30s
    # do not verify the kubelet serving certificate
            - --kubelet-insecure-tls
    # hostnames are resolved by default, but coredns has no records for the physical hosts, so prefer the node IP
            - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
    ... ...
    # use a domestic mirror image
            image: registry.aliyuncs.com/google_containers/addon-resizer:1.8.5
            command:
              - /pod_nanny
              - --config-dir=/etc/config
              #- --cpu=80m
              - --extra-cpu=0.5m
              #- --memory=80Mi
              #- --extra-memory=8Mi
              - --threshold=5
              - --deployment=metrics-server-v0.3.3
              - --container=metrics-server
              - --poll-period=300000
              - --estimator=exponential
              - --minClusterSize=2
    # add the nodes/stats permission
    kind: ClusterRole
    metadata:
      name: system:metrics-server
      labels:
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      - nodes
      - nodes/stats
    

    Without the parameters above, errors like the following may appear:

    unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:k8s-node02: unable to fetch metrics from Kubelet k8s-node02 (10.10.0.13): request failed - "401 Unauthorized", response: "Unauthorized", unable to fully scrape metrics from source kubelet_summary:k8s-node01: unable to fetch metrics from Kubelet k8s-node01 (10.10.0.12): request failed - "401 Unauthorized", response: "Unauthorized"]
    
    1. Create the resources (images required: registry.aliyuncs.com/google_containers/addon-resizer:1.8.5 and registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.3)
    kubectl create -f .
    
    1. Check the resources, using -l to select by label
    kubectl get pod -n kube-system -l k8s-app=metrics-server
    
    1. Check resource metrics: this fails for now
    kubectl top nodes
    
    1. Note: with a binary install, kubelet, kube-proxy and docker-ce must also be installed on the master node, and the master must join the cluster as a worker node; otherwise the apiserver may be unable to reach metrics-server and report a timeout.
    kubectl get apiservices v1beta1.metrics.k8s.io -o yaml
    
    # error message: metrics-server cannot be reached by the apiserver
    "metrics-server error "Client.Timeout exceeded while awaiting headers"
    
    1. For other errors, inspect the APIService and the pod logs
    kubectl describe apiservice v1beta1.metrics.k8s.io
    kubectl get pods -n kube-system | grep 'metrics'
    kubectl logs metrics-server-v0.3.3-6b7c586ffd-7b4n4 metrics-server -n kube-system
    
    1. Edit kube-apiserver.service to enable the aggregation layer and point it at the certificates
    cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=etcd.service
    [Service]
    ExecStart=/usr/sbin/kube-apiserver \
      --audit-log-path /var/log/kubernetes/audit-log \
      --audit-policy-file /etc/kubernetes/audit.yaml \
      --authorization-mode RBAC \
      --client-ca-file /etc/kubernetes/ca.pem \
      --requestheader-client-ca-file /etc/kubernetes/ca.pem \
      --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
      --etcd-cafile /etc/kubernetes/ca.pem \
      --etcd-certfile /etc/kubernetes/client.pem \
      --etcd-keyfile /etc/kubernetes/client-key.pem \
      --etcd-servers https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \
      --service-account-key-file /etc/kubernetes/ca-key.pem \
      --service-cluster-ip-range 10.254.0.0/16 \
      --service-node-port-range 30000-59999 \
      --kubelet-client-certificate /etc/kubernetes/client.pem \
      --kubelet-client-key /etc/kubernetes/client-key.pem \
      --proxy-client-cert-file=/etc/kubernetes/client.pem \
      --proxy-client-key-file=/etc/kubernetes/client-key.pem \
      --requestheader-allowed-names= \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-username-headers=X-Remote-User \
      --log-dir /var/log/kubernetes/ \
      --logtostderr=false \
      --tls-cert-file /etc/kubernetes/apiserver.pem \
      --tls-private-key-file /etc/kubernetes/apiserver-key.pem \
      --v 2
    Restart=on-failure
    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl restart kube-apiserver.service
    
    # flags that enable the aggregation layer and its certificates
    --requestheader-client-ca-file /etc/kubernetes/ca.pem \ # already configured
    --proxy-client-cert-file=/etc/kubernetes/client.pem \
    --proxy-client-key-file=/etc/kubernetes/client-key.pem \
    --requestheader-allowed-names= \
    --requestheader-extra-headers-prefix=X-Remote-Extra- \
    --requestheader-group-headers=X-Remote-Group \
    --requestheader-username-headers=X-Remote-User \
    

    Note: if --requestheader-allowed-names is not empty, the CN of the --proxy-client-cert-file certificate must appear in the allowed names (the default is aggregator).

    If the kube-apiserver host does not run kube-proxy, the --enable-aggregator-routing=true flag is also required.

    Note: if the aggregation layer is not enabled, kube-apiserver reports an error like:

    I0109 05:55:43.708300       1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
    Error: cluster doesn't provide requestheader-client-ca-file
    
    1. Check and adjust kubelet.service on every node, otherwise node and pod resource usage cannot be collected properly
    • remove --read-only-port=0
    • add --authentication-token-webhook=true
    cat > /usr/lib/systemd/system/kubelet.service <<EOF
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service multi-user.target
    Requires=docker.service
    [Service]
    ExecStart=/usr/bin/kubelet \
      --anonymous-auth=false \
      --cgroup-driver systemd \
      --cluster-dns 10.254.230.254 \
      --cluster-domain cluster.local \
      --runtime-cgroups=/systemd/system.slice \
      --kubelet-cgroups=/systemd/system.slice \
      --fail-swap-on=false \
      --client-ca-file /etc/kubernetes/ca.pem \
      --tls-cert-file /etc/kubernetes/kubelet.pem \
      --tls-private-key-file /etc/kubernetes/kubelet-key.pem \
      --hostname-override 10.0.0.12 \
      --image-gc-high-threshold 90 \
      --image-gc-low-threshold 70 \
      --kubeconfig /etc/kubernetes/kubelet.kubeconfig \
      --authentication-token-webhook=true \
      --log-dir /var/log/kubernetes/ \
      --pod-infra-container-image t29617342/pause-amd64:3.0 \
      --logtostderr=false \
      --v=2
    Restart=on-failure
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl restart kubelet.service
    
    1. Redeploy (metrics-server generates a self-signed certificate)
    cd /root/k8s_yaml/metrics/
    kubectl delete -f .
    kubectl create -f .
    
    1. Check resource metrics
    [root@k8s-master metrics]# kubectl top nodes
    NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
    10.0.0.11   99m          9%     644Mi           73%       
    10.0.0.12   56m          5%     1294Mi          68%       
    10.0.0.13   44m          4%     622Mi           33%
    

    Dynamic storage

    Set up NFS to provide the backing storage

    1. Install nfs-utils on all nodes
    yum -y install nfs-utils
    
    1. Deploy the NFS service on the master node
    mkdir -p /data/tomcat-db
    cat > /etc/exports <<EOF
    /data    10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
    EOF
    systemctl start nfs
    
    1. Check the export from all node nodes (an optional mount test follows)
    showmount -e 10.0.0.11
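
    A manual mount is a quick way to confirm the export is actually writable (an optional check, not in the original steps):
    mount -t nfs 10.0.0.11:/data /mnt && touch /mnt/.nfs_test && rm -f /mnt/.nfs_test && umount /mnt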
    

    Configure dynamic storage

    When a PVC is created, the matching PV is provisioned automatically.

    1. Prepare the yaml files for the StorageClass and the Deployment and RBAC objects it depends on

    mkdir /root/k8s_yaml/storageclass/ && cd /root/k8s_yaml/storageclass/
    # the provisioner that auto-creates PVs behind the StorageClass
    cat > /root/k8s_yaml/storageclass/nfs-client.yaml <<EOF
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: nfs-client-provisioner
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: quay.io/external_storage/nfs-client-provisioner:latest
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: fuseim.pri/ifs
                - name: NFS_SERVER
                  value: 10.0.0.11
                - name: NFS_PATH
                  value: /data
          volumes:
            - name: nfs-client-root
              nfs:
                server: 10.0.0.11
                path: /data
    EOF
    # RBAC
    cat > /root/k8s_yaml/storageclass/nfs-client-rbac.yaml <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
    
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["list", "watch", "create", "update", "patch"]
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
    
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        namespace: default
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    EOF
    # create the StorageClass backed by nfs-client-provisioner
    cat > /root/k8s_yaml/storageclass/nfs-client-class.yaml <<EOF
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: course-nfs-storage
    provisioner: fuseim.pri/ifs
    EOF
    
    1. Create the resources (image required: quay.io/external_storage/nfs-client-provisioner:latest)
    kubectl create -f .
    
    1. Create a PVC: the yaml adds the storage-class annotation (the class could also be marked as the default instead); a quick way to exercise it follows the yaml
    cat > /root/k8s_yaml/storageclass/test_pvc1.yaml <<EOF
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc1
      annotations:
        volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
    EOF
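
    A minimal way to exercise the StorageClass (a sketch, not in the original steps): create the claim, confirm it binds, and optionally make the class the cluster default so the annotation can be dropped.
    kubectl create -f test_pvc1.yaml
    kubectl get pvc pvc1   # STATUS should become Bound
    kubectl get pv         # an auto-created PV provisioned by fuseim.pri/ifs
    # optional: mark the class as default
    kubectl patch storageclass course-nfs-storage -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'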
    

    Integrating Jenkins with k8s

    Jenkins runs on a physical host (it changes often), and k8s now requires authentication; two options:

    • Option 1: install a Kubernetes authentication plugin in Jenkins
    • Option 2: drive k8s remotely with a kubectl of the same version, pointing it at a client kubeconfig
    kubectl --kubeconfig='kubelet.kubeconfig' get nodes
    

    With kubeadm, the admin credentials are at /etc/kubernetes/admin.conf.
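
    A minimal sketch of option 2 (assumptions: the kubeconfig has been copied to the Jenkins host, e.g. as /var/lib/jenkins/.kube/k8s.conf, and the kubectl binary matches the cluster version; path and deployment name are illustrative only):
    # inside a Jenkins "Execute shell" build step
    kubectl --kubeconfig=/var/lib/jenkins/.kube/k8s.conf set image deployment/nginx nginx=nginx:1.15
    kubectl --kubeconfig=/var/lib/jenkins/.kube/k8s.conf rollout status deployment/nginx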

