Notes on Manually Deploying Kubernetes 1.9


    Preface

    At the time of writing, the latest official Kubernetes release is 1.10. Since others (漠然) have already worked through the pitfalls, I decided to deploy Kubernetes 1.9 myself and try out this version's new features. Compared with the Kubernetes 1.7 HA deployment covered earlier, essentially not much has changed; this post mainly summarizes the parameters that changed and the deployment of a few other components.

    1. Configuration Changes

    1.1 API server configuration changes

    • Removed the --runtime-config=rbac.authorization.k8s.io/v1beta1 setting: RBAC is now stable and part of the v1 API, so it no longer needs to be explicitly enabled;
    • The --authorization-mode flag gains a Node authorizer, because since 1.8 the system:node role is no longer automatically granted to the system:nodes group;
    • The admission controller option is renamed to --enable-admission-plugins (replacing --admission-control), and NodeRestriction is added to the enabled plugin list;
    • Added the --audit-policy-file flag for pointing at an advanced audit policy (a minimal policy sketch follows this list);
    • Removed the --experimental-bootstrap-token-auth flag; it is replaced by --enable-bootstrap-token-auth;
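
    Since this post does not otherwise show the audit policy file, here is a minimal sketch of what /etc/kubernetes/ssl/audit-policy.yaml could contain; logging everything at Metadata level is an assumption on my part, not a recommendation:

    cat << 'EOF' > /etc/kubernetes/ssl/audit-policy.yaml
    # minimal advanced-audit policy: record request metadata for every call;
    # audit.k8s.io/v1beta1 is the version served by Kubernetes 1.9
    apiVersion: audit.k8s.io/v1beta1
    kind: Policy
    rules:
      - level: Metadata
    EOF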

    My apiserver configuration, for reference:

    [root@master01 ~]# cat /etc/kubernetes/apiserver 
    # kubernetes system config
    #
    # The following values are used to configure the kube-apiserver
    #
     
    # The address on the local server to listen to.
    KUBE_API_ADDRESS="--advertise-address=192.168.133.128 --insecure-bind-address=127.0.0.1 --bind-address=192.168.133.128"
     
    # The port on the local server to listen on.
    KUBE_API_PORT="--insecure-port=8080 --secure-port=6443"
     
    # Port minions listen on
    # KUBELET_PORT="--kubelet-port=10250"
     
    # Comma separated list of nodes in the etcd cluster
    KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.133.128:2379,https://192.168.133.129:2379,https://192.168.133.130:2379"
     
    # Address range to use for services
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
     
    # default admission control policies
    KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,ResourceQuota,NodeRestriction"
    
    # Add your own!
    KUBE_API_ARGS="--authorization-mode=RBAC,Node 
                   --anonymous-auth=false 
                   --kubelet-https=true 
                   --enable-bootstrap-token-auth 
                   --token-auth-file=/etc/kubernetes/ssl/token.csv 
                   --service-node-port-range=30000-50000 
                   --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem 
                   --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem 
                   --client-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem 
                   --service-account-key-file=/etc/kubernetes/ssl/k8s-root-ca.pem 
                   --audit-policy-file=/etc/kubernetes/ssl/audit-policy.yaml 
                   --etcd-quorum-read=true 
                   --storage-backend=etcd3 
                   --etcd-cafile=/etc/etcd/ssl/etcd-root-ca.pem 
                   --etcd-certfile=/etc/etcd/ssl/etcd.pem 
                   --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem 
                   --etcd-compaction-interval=5m0s 
                   --enable-swagger-ui=true 
                   --enable-garbage-collector 
                   --enable-logs-handler 
                   --kubelet-timeout=3s 
                   --apiserver-count=3 
                   --audit-log-maxage=30 
                   --audit-log-maxbackup=3 
                   --audit-log-maxsize=100 
                   --audit-log-path=/var/log/kube-audit/audit.log 
                   --event-ttl=1h 
                   --log-flush-frequency=5s"

    1.2 kube-controller-manager configuration changes

    • Certificate rotation for automatically signing kubelet certificates is now enabled by default, and the signing duration here is set to roughly 10 years, adjustable via --experimental-cluster-signing-duration=86700h0m0s (a CSR auto-approval sketch follows this list);
    • Added the --controllers option (--controllers=*,bootstrapsigner,tokencleaner) to enable all default controllers plus the bootstrap signer and token cleaner;
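
    Note that bootstrap tokens plus rotation alone do not get node certificates signed: something still has to approve the CSRs. A common companion step, sketched here under the assumption that your bootstrap token puts kubelets into the system:bootstrappers group, is to bind that group to the built-in auto-approval ClusterRole:

    kubectl create clusterrolebinding node-client-auto-approve-csr \
      --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
      --group=system:bootstrappers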

    My controller-manager configuration, for reference:

    # The following values are used to configure the kubernetes controller-manager
    
    # defaults from config and apiserver should be adequate
    
    # Add your own!
    KUBE_CONTROLLER_MANAGER_ARGS="--address=0.0.0.0 
                                  --service-cluster-ip-range=10.254.0.0/16 
                                  --cluster-name=kubernetes 
                                  --cluster-signing-cert-file=/etc/kubernetes/ssl/k8s-root-ca.pem 
                                  --cluster-signing-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem 
                                  --service-account-private-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem 
                                  --controllers=*,bootstrapsigner,tokencleaner 
                                  --deployment-controller-sync-period=10s 
                                  --experimental-cluster-signing-duration=86700h0m0s 
                                  --root-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem 
                                  --leader-elect=true 
                                  --node-monitor-grace-period=40s 
                                  --node-monitor-period=5s 
                                  --pod-eviction-timeout=5m0s 
                                  --feature-gates=RotateKubeletServerCertificate=true"

    1.3 kube-scheduler configuration changes

    • Leader election is back to being enabled by default (leader-elect=true); see the v1.9.5 changelog. A quick leader check is sketched after the config below;

    My scheduler configuration, for reference:

    [root@master01 ~]# cat /etc/kubernetes/scheduler 
    ###
    # kubernetes scheduler config
     
    # default config should be adequate
     
    # Add your own!
    KUBE_SCHEDULER_ARGS="--leader-elect=true --address=0.0.0.0 
                         --algorithm-provider=DefaultProvider"
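
    With leader election on, a quick way to see which instance currently holds the lock is to look at the leader-election annotation on the kube-scheduler endpoint in kube-system (the holderIdentity field names the winning node):

    kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity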

    For more details, see the changelog and the official reference: https://v1-9.docs.kubernetes.io/docs/reference/generated/kubelet/

    2. Network Plugin Deployment

    2.1 About Calico

    Calico is a pure layer-3 data center networking solution that needs no overlay, and it integrates well with OpenStack, Kubernetes, AWS, and other platforms. On every node Calico uses the Linux kernel to implement an efficient vRouter that handles data forwarding, and each vRouter advertises the routes of its local workloads to the rest of the Calico network over BGP. Small deployments can interconnect all nodes directly in a full mesh, while large deployments can use designated BGP route reflectors instead. Either way, all workload-to-workload traffic ends up delivered by plain IP routing. Calico nodes can be wired straight into the existing data center fabric (whether L2 or L3) with no extra NAT or overlay network.

    Beyond that, Calico provides rich and flexible network policy on top of iptables, using per-node ACLs to give workloads multi-tenant isolation, security groups, and other reachability restrictions; a minimal NetworkPolicy example follows.
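
    As a concrete example of such a policy (my illustration, not part of the deployment below), the following denies all ingress to every pod in the current namespace unless another policy allows it; Felix renders this into iptables rules on each node:

    cat << 'EOF' > default-deny.yaml
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
    spec:
      podSelector: {}        # match every pod in the namespace
      policyTypes:
      - Ingress              # no ingress rules listed, so all ingress is denied
    EOF
    kubectl apply -f default-deny.yaml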

    Calico core components:

    • Felix, the Calico agent, runs on every node that hosts workloads and programs routes and ACLs to keep endpoints connected;
    • etcd, a distributed key-value store, keeps the network metadata consistent and thereby the Calico network state accurate;
    • BGP Client (BIRD), distributes the routes that Felix writes into the kernel to the rest of the Calico network, keeping workload-to-workload traffic routable;
    • BGP Route Reflector (BIRD), used in large deployments to replace the all-nodes mesh with centralized route distribution through one or more route reflectors;
    • calico/calico-ipam, which serve as the CNI plugins for Kubernetes;

    IP-in-IP
    Calico's control-plane design assumes the physical network is an L2 fabric, so that all vRouters are directly reachable and routes never need a physical device as the next hop. To support L3 fabrics, Calico introduced the IP-in-IP option.
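
    Once the cluster is up you can check which mode the pool actually got; the command below uses the calicoctl v1.x syntax matching the Calico 2.6 images used later in this post, and assumes calicoctl is installed with ETCD_ENDPOINTS and the etcd TLS variables exported:

    ## the ipip section of the output should reflect CALICO_IPV4POOL_IPIP=always
    calicoctl get ippool -o yaml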

    2.2 Installing Calico

    For deploying Calico, the official docs recommend the "Standard Hosted Install", i.e., all components managed as services by Kubernetes; the other option is a custom hosted install that integrates only the components required by your own configuration management. Under the Standard Hosted Install, calico-node, calico-cni, and calico-kube-controller are all deployed and managed by Kubernetes. In the approach used here, calico-node is started by systemd through docker, calico-cni is installed from a binary with its network config written by hand, and calico-kube-controller is still deployed through Kubernetes. See the official Calico documentation for specifics.

    2.2.1 Create the calico-node systemd unit

    cat << 'EOF' > /usr/lib/systemd/system/calico-node.service
    [Unit]
    Description=calico node
    After=docker.service
    Requires=docker.service
    
    [Service]
    User=root
    Environment=ETCD_ENDPOINTS=https://172.16.204.131:2379
    PermissionsStartOnly=true
    ExecStart=/usr/bin/docker run   --net=host --privileged --name=calico-node \
                                    -e ETCD_ENDPOINTS=${ETCD_ENDPOINTS} \
                                    -e ETCD_CA_CERT_FILE=/etc/etcd/ssl/etcd-root-ca.pem \
                                    -e ETCD_CERT_FILE=/etc/etcd/ssl/etcd.pem \
                                    -e ETCD_KEY_FILE=/etc/etcd/ssl/etcd-key.pem \
                                    -e NODENAME=node01 \
                                    -e IP= \
                                    -e IP6= \
                                    -e NO_DEFAULT_POOLS= \
                                    -e AS= \
                                    -e CALICO_IPV4POOL_CIDR=10.20.0.0/16 \
                                    -e CALICO_IPV4POOL_IPIP=always \
                                    -e CALICO_LIBNETWORK_ENABLED=true \
                                    -e CALICO_NETWORKING_BACKEND=bird \
                                    -e CALICO_DISABLE_FILE_LOGGING=true \
                                    -e FELIX_IPV6SUPPORT=false \
                                    -e FELIX_DEFAULTENDPOINTTOHOSTACTION=ACCEPT \
                                    -e FELIX_LOGSEVERITYSCREEN=info \
                                    -v /etc/etcd/ssl/etcd-root-ca.pem:/etc/etcd/ssl/etcd-root-ca.pem \
                                    -v /etc/etcd/ssl/etcd.pem:/etc/etcd/ssl/etcd.pem \
                                    -v /etc/etcd/ssl/etcd-key.pem:/etc/etcd/ssl/etcd-key.pem \
                                    -v /var/run/calico:/var/run/calico \
                                    -v /lib/modules:/lib/modules \
                                    -v /run/docker/plugins:/run/docker/plugins \
                                    -v /var/run/docker.sock:/var/run/docker.sock \
                                    -v /var/log/calico:/var/log/calico \
                                    calico/node:v2.6.9
    ExecStop=/usr/bin/docker rm -f calico-node
    Restart=always
    RestartSec=10
    
    [Install]
    WantedBy=multi-user.target
    EOF

    Start the calico-node service

    systemctl daemon-reload
    systemctl start calico-node
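
    To have the service come up on boot as well, and to confirm that BIRD actually establishes its BGP sessions, the following checks help; calicoctl node status assumes the calicoctl binary (matching the 2.6 series) is installed on the host, which this post does not otherwise cover:

    systemctl enable calico-node
    systemctl status calico-node
    ## once a second node is up, its peer should show "Established" here
    calicoctl node status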
    

    2.2.2 Edit the calico.yaml file

    Download the manifests

    wget https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/rbac.yaml
    wget https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/calico.yaml

    Modify calico.yaml

    ## point etcd_endpoints at your own etcd cluster
    sed -i 's@.*etcd_endpoints:.*@  etcd_endpoints: "https://172.16.204.131:2379"@gi' calico.yaml
    
    export ETCD_CERT=$(cat /etc/etcd/ssl/etcd.pem | base64 | tr -d '\n')
    export ETCD_KEY=$(cat /etc/etcd/ssl/etcd-key.pem | base64 | tr -d '\n')
    export ETCD_CA=$(cat /etc/etcd/ssl/etcd-root-ca.pem | base64 | tr -d '\n')
    
    sed -i "s@.*etcd-cert:.*@  etcd-cert: ${ETCD_CERT}@gi" calico.yaml
    sed -i "s@.*etcd-key:.*@  etcd-key: ${ETCD_KEY}@gi" calico.yaml
    sed -i "s@.*etcd-ca:.*@  etcd-ca: ${ETCD_CA}@gi" calico.yaml
    
    sed -i 's@.*etcd_ca:.*@  etcd_ca: "/calico-secrets/etcd-ca"@gi' calico.yaml
    sed -i 's@.*etcd_cert:.*@  etcd_cert: "/calico-secrets/etcd-cert"@gi' calico.yaml
    sed -i 's@.*etcd_key:.*@  etcd_key: "/calico-secrets/etcd-key"@gi' calico.yaml
    
    ## keep Kubernetes from launching the calico-node container (comment out the DaemonSet)
    sed -i '106,197s@.*@#&@gi' calico.yaml
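
    Before applying the manifest, it is worth double-checking the result of those sed edits, in particular the hard-coded 106,197 line range, which only matches this specific revision of calico.yaml:

    ## confirm the endpoint/cert substitutions landed
    grep -n 'etcd_endpoints\|etcd-cert\|etcd_ca' calico.yaml
    ## confirm it really is the calico-node DaemonSet that got commented out
    sed -n '106,197p' calico.yaml | head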

    2.2.3 Modify the kubelet configuration

    [root@node01 ~]# cat /etc/kubernetes/kubelet
    ###
    # kubernetes kubelet (minion) config
    
    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
    KUBELET_ADDRESS="--address=172.16.204.132"
    
    # The port for the info server to serve on
    # KUBELET_PORT="--port=10250"
    
    # You may leave this blank to use the actual hostname
    KUBELET_HOSTNAME="--hostname-override=172.16.204.132"
    
    # location of the api-server
    # KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
    
    # Add your own!
    # KUBELET_ARGS="--cgroup-driver=systemd"
    KUBELET_ARGS="--cgroup-driver=systemd 
                  --network-plugin=cni 
                  --cni-conf-dir=/etc/cni/net.d 
                  --cni-bin-dir=/opt/cni/bin 
                  --cluster-dns=10.254.0.2 
                  --resolv-conf=/etc/resolv.conf 
                  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig 
                  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig 
                  --fail-swap-on=false 
                  --cert-dir=/etc/kubernetes/ssl 
                  --cluster-domain=cluster.local. 
                  --hairpin-mode=promiscuous-bridge 
                  --serialize-image-pulls=false 
                  --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"

    Add the content above, then restart the service

    systemctl daemon-reload
    systemctl restart kubelet
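
    Because the kubelet bootstraps its client certificate through --experimental-bootstrap-kubeconfig, the node will not appear in kubectl get nodes until its CSR is approved. If you did not set up auto-approval (see the sketch in section 1.2), approve it by hand; <csr-name> below is a placeholder for the pending request:

    kubectl get csr
    kubectl certificate approve <csr-name>
    kubectl get nodes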
    

    2.2.4 Launch the related containers

    ## create the RBAC resources
    kubectl apply -f rbac.yaml
    ## launch the calico-cni and calico-kube-controllers containers
    kubectl create -f calico.yaml
    

    2.2.5 Testing the Calico network

    Create a simple demo deployment for testing

    cat << EOF > demo.deploy.yml
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: demo-tomcat
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: demo
        spec:
          containers:
          - name: demo
            image: tomcat:9.0.7
            ports:
        - containerPort: 8080
    EOF
    kubectl create -f demo.deploy.yml
    kubectl get pods -o wide --all-namespaces

    Test

    [root@master01 calico]# kubectl get pods --all-namespaces -o wide
    NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE       IP                NODE
    default       demo-tomcat-56697dcc5b-2jv69               1/1       Running   0          34s       10.20.196.136     192.168.133.129
    default       demo-tomcat-56697dcc5b-lmc2h               1/1       Running   0          35s       10.20.140.74      192.168.133.130
    default       demo-tomcat-56697dcc5b-whbg7               1/1       Running   0          34s       10.20.140.73      192.168.133.130
    kube-system   calico-kube-controllers-684fcf8587-66kxn   1/1       Running   0          43m       192.168.133.129   192.168.133.129
    kube-system   calico-node-hpr9c                          1/1       Running   0          43m       192.168.133.129   192.168.133.129
    kube-system   calico-node-jvpf2                          1/1       Running   0          43m       192.168.133.130   192.168.133.130
    [root@master01 calico]# kubectl exec -it demo-tomcat-56697dcc5b-2jv69 bash
    root@demo-tomcat-56697dcc5b-2jv69:/usr/local/tomcat# pin
    pinentry         pinentry-curses  ping             ping6            pinky            
    root@demo-tomcat-56697dcc5b-2jv69:/usr/local/tomcat# ping 10.20.140.74
    PING 10.20.140.74 (10.20.140.74): 56 data bytes
    64 bytes from 10.20.140.74: icmp_seq=0 ttl=62 time=0.673 ms
    64 bytes from 10.20.140.74: icmp_seq=1 ttl=62 time=0.398 ms
    ^C--- 10.20.140.74 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max/stddev = 0.398/0.536/0.673/0.138 ms
    root@demo-tomcat-56697dcc5b-2jv69:/usr/local/tomcat# ping 10.20.140.73
    PING 10.20.140.73 (10.20.140.73): 56 data bytes
    64 bytes from 10.20.140.73: icmp_seq=0 ttl=62 time=0.844 ms
    64 bytes from 10.20.140.73: icmp_seq=1 ttl=62 time=0.348 ms
    ^C--- 10.20.140.73 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max/stddev = 0.348/0.596/0.844/0.248 ms
    root@demo-tomcat-56697dcc5b-2jv69:/usr/local/tomcat# ping 10.20.196.136
    PING 10.20.196.136 (10.20.196.136): 56 data bytes
    64 bytes from 10.20.196.136: icmp_seq=0 ttl=64 time=0.120 ms
    64 bytes from 10.20.196.136: icmp_seq=1 ttl=64 time=0.068 ms
    ^C--- 10.20.196.136 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max/stddev = 0.068/0.094/0.120/0.026 ms
    

    Summary:

    There is no complete, one-size-fits-all answer when choosing a Kubernetes network plugin; the decision mainly depends on your own environment, and Calico in particular has quite a few pitfalls. A few fairly practical references:
    https://feisky.gitbooks.io/sdn/basic/tcpip.html#tcpip%E7%BD%91%E7%BB%9C%E6%A8%A1%E5%9E%8B
    http://www.shushilvshe.com/data/kubernete-calico.html#data/kubernete-calico
    http://www.51yimo.com/2017/09/26/calico-install-on-kubernetes/

    3. Installing CoreDNS

    3.1 About CoreDNS

    Not much to say here: CoreDNS is simply a replacement for the kube-dns add-on.
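
    For the flags used in the next subsection (-d cluster.local, -r 10.254.0.0/16), the Corefile that the deployment renders into its ConfigMap looks roughly like this; it is paraphrased from the deploy template of that era rather than copied, so check the script's real output:

    .:53 {
        errors
        health
        kubernetes cluster.local 10.254.0.0/16 {
          pods insecure
        }
        prometheus
        proxy . /etc/resolv.conf
        cache 30
    }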

    3.2 Deployment

    First download the deploy.sh and coredns.yaml.sed files, then install directly

    ./deploy.sh -r 10.254.0.0/16 -i 10.254.0.2 -d cluster.local | kubectl apply -f - 

    Tip: the script's contents and parameters may differ depending on the version you use, so take a look at the script before running it; one render-then-apply variant is sketched below.
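
    One way to follow that advice is to render the manifest to a file first, review it, and only then apply it; same flags as above, with deploy.sh substituting them into coredns.yaml.sed:

    ./deploy.sh -r 10.254.0.0/16 -i 10.254.0.2 -d cluster.local > coredns.yaml
    less coredns.yaml
    kubectl apply -f coredns.yaml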

    [root@master01 coredns]# kubectl get pods --all-namespaces
    NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE
    kube-system   calico-kube-controllers-684fcf8587-5ndks   1/1       Running   1          11d
    kube-system   calico-node-4wskw                          1/1       Running   1          11d
    kube-system   calico-node-sbngf                          1/1       Running   1          11d
    kube-system   coredns-64b597b598-fmh85                   1/1       Running   0          57s
    kube-system   coredns-64b597b598-jf88d                   1/1       Running   0          57s

    3.3 Verifying CoreDNS

    Deploy a test nginx pod

    cat > my-nginx.yaml << EOF
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: my-nginx
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            run: my-nginx
        spec:
          containers:
          - name: my-nginx
            image: nginx:1.7.9
            ports:
            - containerPort: 80
    EOF
    
    kubectl create -f my-nginx.yaml

    Create a Service for the my-nginx pods and check its cluster IP

    ## expose the my-nginx deployment as a Service
    kubectl expose deploy my-nginx
    
    ## list the created services
    [root@master01 ~]# kubectl get services --all-namespaces
    NAMESPACE     NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
    default       kubernetes   ClusterIP   10.254.0.1     <none>        443/TCP         12d
    default       my-nginx     ClusterIP   10.254.37.75   <none>        80/TCP          13s
    kube-system   kube-dns     ClusterIP   10.254.0.2     <none>        53/UDP,53/TCP   4m

    Verify CoreDNS resolution

    [root@master01 ~]# kubectl exec -it my-nginx-56b48db847-g8fr2 /bin/bash
    root@my-nginx-56b48db847-g8fr2:/# cat /etc/resolv.conf
    nameserver 10.254.0.2
    search default.svc.cluster.local. svc.cluster.local. cluster.local.
    options ndots:5
    root@my-nginx-56b48db847-g8fr2:/# ping my-nginx
    PING my-nginx.default.svc.cluster.local (10.254.37.75): 48 data bytes
    ^C--- my-nginx.default.svc.cluster.local ping statistics ---
    7 packets transmitted, 0 packets received, 100% packet loss
    root@my-nginx-56b48db847-g8fr2:/# ping kubernetes
    PING kubernetes.default.svc.cluster.local (10.254.0.1): 48 data bytes
    ^C--- kubernetes.default.svc.cluster.local ping statistics ---
    5 packets transmitted, 0 packets received, 100% packet loss
    root@my-nginx-56b48db847-g8fr2:/# ping kube-dns.kube-system.svc.cluster.local
    PING kube-dns.kube-system.svc.cluster.local (10.254.0.2): 48 data bytes
    ^C--- kube-dns.kube-system.svc.cluster.local ping statistics ---
    6 packets transmitted, 0 packets received, 100% packet loss
    root@my-nginx-56b48db847-g8fr2:/# curl -I my-nginx
    HTTP/1.1 200 OK
    Server: nginx/1.7.9
    Date: Tue, 08 May 2018 07:27:13 GMT
    Content-Type: text/html
    Content-Length: 612
    Last-Modified: Tue, 23 Dec 2014 16:25:09 GMT
    Connection: keep-alive
    ETag: "54999765-264"
    Accept-Ranges: bytes
    
    root@my-nginx-56b48db847-g8fr2:/# curl my-nginx.default.svc.cluster.local
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    body {
    width: 35em;
    margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif;
    }
    
    ... rest of the page omitted ...

    As shown above, service names resolve to their cluster IPs. The pings themselves show 100% loss because a cluster IP is virtual and does not answer ICMP, but DNS resolution works and the HTTP requests through the Service succeed.
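
    For a lookup that does not depend on exec-ing into an application pod, a throwaway busybox pod works as well; busybox:1.28 is my choice here because later busybox images ship a broken nslookup:

    kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
      nslookup my-nginx.default.svc.cluster.local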
