Setting up a Kubernetes cluster on Ubuntu 16.04


    Introduction

    The kube-up scripts that Kubernetes currently provides for Ubuntu do not support 15.10 or 16.04, the releases that use systemd as the init system.

    This article walks through manually installing and deploying Kubernetes on an Ubuntu 16.04 cluster, running the components directly on the hosts rather than inside Docker.

    The manual steps below can easily be turned into an automated deployment script, and working through them is also a good way to understand the Kubernetes architecture and its individual components.

    Environment

    Versions

    Component    Version
    etcd         2.3.1
    Flannel      0.5.5
    Kubernetes   1.3.4

    Hosts

    Host         IP               OS
    k8s-master   172.16.203.133   Ubuntu 16.04
    k8s-node01   172.16.203.134   Ubuntu 16.04
    k8s-node02   172.16.203.135   Ubuntu 16.04

    Install Docker

    Install the latest Docker Engine (currently 1.12) on every host - https://docs.docker.com/engine/installation/linux/ubuntulinux/
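    One quick route is Docker's convenience script (a sketch; the repository-based install described at the link above gives more control over the installed version):

    # install the current Docker Engine release (assumes internet access on each host)
    curl -fsSL https://get.docker.com/ | sh
    sudo usermod -aG docker $USER    # optional: let the current user run docker without sudo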

    Deploy the etcd cluster

    We will deploy a three-member etcd cluster across the three hosts.

    Download etcd

    On the deployment machine, download etcd:

    ETCD_VERSION=${ETCD_VERSION:-"2.3.1"}
    ETCD="etcd-v${ETCD_VERSION}-linux-amd64"
    curl -L https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/${ETCD}.tar.gz -o etcd.tar.gz

    tar xzf etcd.tar.gz -C /tmp
    cd /tmp/etcd-v${ETCD_VERSION}-linux-amd64

    for h in k8s-master k8s-node01 k8s-node02; do ssh user@$h mkdir -p '$HOME/kube' && scp -r etcd* user@$h:~/kube; done
    for h in k8s-master k8s-node01 k8s-node02; do ssh user@$h 'sudo mkdir -p /opt/bin && sudo mv $HOME/kube/* /opt/bin && rm -rf $HOME/kube/*'; done

    Configure the etcd service

    On each host, create /opt/config/etcd.conf and /lib/systemd/system/etcd.service as shown below. Note that the IP addresses must be changed to each host's own address, and the member names in ETCD_INITIAL_CLUSTER must match each host's ETCD_NAME.

    /opt/config/etcd.conf

    sudo mkdir -p /var/lib/etcd/
    sudo mkdir -p /opt/config/
    cat <<EOF | sudo tee /opt/config/etcd.conf
    ETCD_DATA_DIR=/var/lib/etcd
    ETCD_NAME=$(hostname)
    ETCD_INITIAL_CLUSTER=k8s-master=http://172.16.203.133:2380,k8s-node01=http://172.16.203.134:2380,k8s-node02=http://172.16.203.135:2380
    ETCD_INITIAL_CLUSTER_STATE=new
    ETCD_LISTEN_PEER_URLS=http://172.16.203.133:2380
    ETCD_INITIAL_ADVERTISE_PEER_URLS=http://172.16.203.133:2380
    ETCD_ADVERTISE_CLIENT_URLS=http://172.16.203.133:2379
    ETCD_LISTEN_CLIENT_URLS=http://172.16.203.133:2379
    GOMAXPROCS=$(nproc)
    EOF
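    Since only the addresses differ from host to host, the same snippet can generate the file on every machine. A minimal sketch, assuming the first address reported by hostname -I is the 172.16.203.x cluster-facing one:

    HOST_IP=$(hostname -I | awk '{print $1}')    # assumption: first address is the cluster-facing one
    cat <<EOF | sudo tee /opt/config/etcd.conf
    ETCD_DATA_DIR=/var/lib/etcd
    ETCD_NAME=$(hostname)
    ETCD_INITIAL_CLUSTER=k8s-master=http://172.16.203.133:2380,k8s-node01=http://172.16.203.134:2380,k8s-node02=http://172.16.203.135:2380
    ETCD_INITIAL_CLUSTER_STATE=new
    ETCD_LISTEN_PEER_URLS=http://${HOST_IP}:2380
    ETCD_INITIAL_ADVERTISE_PEER_URLS=http://${HOST_IP}:2380
    ETCD_ADVERTISE_CLIENT_URLS=http://${HOST_IP}:2379
    ETCD_LISTEN_CLIENT_URLS=http://${HOST_IP}:2379
    GOMAXPROCS=$(nproc)
    EOF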

     /lib/systemd/system/etcd.service

    [Unit]
    Description=Etcd Server
    Documentation=https://github.com/coreos/etcd
    After=network.target
    
    
    [Service]
    User=root
    Type=simple
    EnvironmentFile=-/opt/config/etcd.conf
    ExecStart=/opt/bin/etcd
    Restart=on-failure
    RestartSec=10s
    LimitNOFILE=40000
    
    [Install]
    WantedBy=multi-user.target

    Then run the following on each host:

    sudo systemctl daemon-reload 
    sudo systemctl enable etcd
    sudo systemctl start etcd
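    Before moving on, check that the three members actually formed a cluster (cluster-health is part of the etcdctl 2.x command set):

    /opt/bin/etcdctl --endpoints="http://172.16.203.133:2379" cluster-health
    # expect "cluster is healthy" plus one "member ... is healthy" line per host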

    Download Flannel

    FLANNEL_VERSION=${FLANNEL_VERSION:-"0.5.5"}
    curl -L https://github.com/coreos/flannel/releases/download/v${FLANNEL_VERSION}/flannel-${FLANNEL_VERSION}-linux-amd64.tar.gz -o flannel.tar.gz
    tar xzf flannel.tar.gz -C /tmp

    Build Kubernetes

    Build Kubernetes on the deployment machine; this requires Docker Engine (1.12) and Go (1.6.2).

    git clone https://github.com/kubernetes/kubernetes.git
    cd kubernetes
    make release-skip-tests
    tar xzf _output/release-stage/full/kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C /tmp

    Note

    In addition to linux/amd64, the build cross-compiles for other platforms by default. To cut build time, edit hack/lib/golang.sh and comment out every platform other than linux/amd64 in KUBE_SERVER_PLATFORMS, KUBE_CLIENT_PLATFORMS, and KUBE_TEST_PLATFORMS, as sketched below.
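    As an illustration, the server platform list would end up looking something like this (array layout as found in the 1.3-era tree; adjust to what you find in your checkout):

    readonly KUBE_SERVER_PLATFORMS=(
      linux/amd64
      # linux/arm    # commented out to skip cross-compilation
    )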

    Deploy the Kubernetes master

    Copy the binaries

    cd /tmp
    scp kubernetes/server/bin/kube-apiserver \
        kubernetes/server/bin/kube-controller-manager \
        kubernetes/server/bin/kube-scheduler kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@172.16.203.133:~/kube
    scp flannel-${FLANNEL_VERSION}/flanneld user@172.16.203.133:~/kube
    ssh -t user@172.16.203.133 'sudo mv ~/kube/* /opt/bin/'

    Create certificates

    On the master host, run the following commands to create the certificates:

    mkdir -p /srv/kubernetes/
    
    cd /srv/kubernetes
    
    export MASTER_IP=172.16.203.133
    openssl genrsa -out ca.key 2048
    openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
    openssl genrsa -out server.key 2048
    openssl req -new -key server.key -subj "/CN=${MASTER_IP}" -out server.csr
    openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 10000 
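    A quick sanity check on what was just generated (standard openssl inspection, not part of the original walkthrough):

    openssl x509 -in /srv/kubernetes/server.crt -noout -subject -dates    # CN should be the master IP
    openssl verify -CAfile /srv/kubernetes/ca.crt /srv/kubernetes/server.crt    # should print "server.crt: OK"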

    Configure the kube-apiserver service

    We will use the following Service cluster IP range and Flannel network:

    SERVICE_CLUSTER_IP_RANGE=172.18.0.0/16

    FLANNEL_NET=192.168.0.0/16

    On the master host, create /lib/systemd/system/kube-apiserver.service with the following content:

    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target

    [Service]
    User=root
    ExecStart=/opt/bin/kube-apiserver \
      --insecure-bind-address=0.0.0.0 \
      --insecure-port=8080 \
      --etcd-servers=http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379 \
      --logtostderr=true \
      --allow-privileged=false \
      --service-cluster-ip-range=172.18.0.0/16 \
      --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota \
      --service-node-port-range=30000-32767 \
      --advertise-address=172.16.203.133 \
      --client-ca-file=/srv/kubernetes/ca.crt \
      --tls-cert-file=/srv/kubernetes/server.crt \
      --tls-private-key-file=/srv/kubernetes/server.key
    Restart=on-failure
    Type=notify
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target

    Configure the kube-controller-manager service

    On the master host, create /lib/systemd/system/kube-controller-manager.service with the following content:

    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    User=root
    ExecStart=/opt/bin/kube-controller-manager \
      --master=127.0.0.1:8080 \
      --root-ca-file=/srv/kubernetes/ca.crt \
      --service-account-private-key-file=/srv/kubernetes/server.key \
      --logtostderr=true
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target

    Configure the kube-scheduler service

    On the master host, create /lib/systemd/system/kube-scheduler.service with the following content:

    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    User=root
    ExecStart=/opt/bin/kube-scheduler \
      --logtostderr=true \
      --master=127.0.0.1:8080
    Restart=on-failure
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target

    Configure the flanneld service

    On the master host, create /lib/systemd/system/flanneld.service with the following content:

    [Unit]
    Description=Flanneld
    Documentation=https://github.com/coreos/flannel
    After=network.target
    Before=docker.service
    
    [Service]
    User=root
    ExecStart=/opt/bin/flanneld \
      --etcd-endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" \
      --iface=172.16.203.133 \
      --ip-masq
    Restart=on-failure
    Type=notify
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target

    Start the services

    /opt/bin/etcdctl --endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" \
      mk /coreos.com/network/config '{"Network":"192.168.0.0/16", "Backend": {"Type": "vxlan"}}'

    sudo systemctl daemon-reload
    sudo systemctl enable kube-apiserver
    sudo systemctl enable kube-controller-manager
    sudo systemctl enable kube-scheduler
    sudo systemctl enable flanneld
    sudo systemctl start kube-apiserver
    sudo systemctl start kube-controller-manager
    sudo systemctl start kube-scheduler
    sudo systemctl start flanneld
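    At this point the control plane should answer on port 8080. A quick check from the deployment machine, using the kubectl binary built earlier:

    curl -s http://172.16.203.133:8080/healthz    # should print "ok"
    /tmp/kubernetes/server/bin/kubectl -s http://172.16.203.133:8080 get componentstatuses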

    Modify the Docker service

    source /run/flannel/subnet.env

    sudo sed -i "s|^ExecStart=/usr/bin/dockerd -H fd://$|ExecStart=/usr/bin/dockerd -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}|g" /lib/systemd/system/docker.service
    rc=0
    ip link show docker0 >/dev/null 2>&1 || rc="$?"
    if [[ "$rc" -eq "0" ]]; then
        sudo ip link set dev docker0 down
        sudo ip link delete docker0
    fi

    sudo systemctl daemon-reload
    sudo systemctl enable docker
    sudo systemctl restart docker
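    After the restart, docker0 should sit inside the subnet that flanneld wrote to /run/flannel/subnet.env. A quick check (not in the original text):

    source /run/flannel/subnet.env
    echo "expected: ${FLANNEL_SUBNET}"
    ip -4 addr show docker0 | grep inet    # the docker0 address should match FLANNEL_SUBNET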

    Deploy the Kubernetes nodes

    Copy the binaries

    cd /tmp
    for h in k8s-master k8s-node01 k8s-node02; do scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@$h:~/kube; done
    for h in k8s-master k8s-node01 k8s-node02; do scp flannel-${FLANNEL_VERSION}/flanneld user@$h:~/kube;done
    for h in k8s-master k8s-node01 k8s-node02; do ssh -t user@$h 'sudo mkdir -p /opt/bin && sudo mv ~/kube/* /opt/bin/'; done

    Configure Flanneld and modify the Docker service

    See the corresponding steps in the master section: configure the flanneld service, start it, and modify the Docker service. Remember to change the --iface address on each node; a sketch of patching it in place follows.
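    A minimal sketch, assuming the flanneld unit is laid out exactly as in the master section (the sed pattern is an assumption; verify the file afterwards):

    NODE_IP=172.16.203.134    # this node's own address
    sudo sed -i "s|--iface=[0-9.]*|--iface=${NODE_IP}|" /lib/systemd/system/flanneld.service
    sudo systemctl daemon-reload && sudo systemctl restart flanneld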

    Configure the kubelet service

    /lib/systemd/system/kubelet.service (remember to change the IP addresses for each node):

    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    Requires=docker.service
    
    [Service]
    ExecStart=/opt/bin/kubelet \
      --hostname-override=172.16.203.133 \
      --api-servers=http://172.16.203.133:8080 \
      --logtostderr=true
    Restart=on-failure
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target

    Start the service

    sudo systemctl daemon-reload
    sudo systemctl enable kubelet
    sudo systemctl start kubelet

    Configure the kube-proxy service

    /lib/systemd/system/kube-proxy.service (remember to change the IP addresses for each node):

    [Unit]
    Description=Kubernetes Proxy
    After=network.target
    
    [Service]
    ExecStart=/opt/bin/kube-proxy \
      --hostname-override=172.16.203.133 \
      --master=http://172.16.203.133:8080 \
      --logtostderr=true
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target

    Start the service

    sudo systemctl daemon-reload
    sudo systemctl enable kube-proxy
    sudo systemctl start kube-proxy
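    kube-proxy defaults to iptables mode in this release, so once services exist its chains should be visible on every node (a spot check, not from the original):

    sudo iptables -t nat -L KUBE-SERVICES -n | head    # one entry per service/port after services are created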

    Configure and verify Kubernetes

    Generate the kubeconfig file

    On the deployment machine, run:

    KUBE_USER=admin
    KUBE_PASSWORD=$(python -c 'import string,random; print("".join(random.SystemRandom().choice(string.ascii_letters + string.digits) for _ in range(16)))')

    DEFAULT_KUBECONFIG="${HOME}/.kube/config"
    KUBECONFIG=${KUBECONFIG:-$DEFAULT_KUBECONFIG}
    mkdir -p $(dirname "${KUBECONFIG}")
    touch "${KUBECONFIG}"
    CONTEXT=ubuntu
    
    KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-cluster "${CONTEXT}" --server=http://172.16.203.133:8080 --insecure-skip-tls-verify=true
    KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-credentials "${CONTEXT}" --username=${KUBE_USER} --password=${KUBE_PASSWORD}
    KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-context "${CONTEXT}" --cluster="${CONTEXT}" --user="${CONTEXT}"
    KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config use-context "${CONTEXT}"  --cluster="${CONTEXT}" 
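    The file just written can be inspected with kubectl itself:

    KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config view    # shows the cluster, user and context entries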

    Verify

    $ kubectl get nodes
    NAME             STATUS    AGE
    172.16.203.133   Ready     2h
    172.16.203.134   Ready     2h
    172.16.203.135   Ready     2h
    
    $ cat <<EOF > nginx.yml
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: my-nginx
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            run: my-nginx
        spec:
          containers:
          - name: my-nginx
            image: nginx
            ports:
            - containerPort: 80
    EOF
    
    $ kubectl create -f nginx.yml
    $ kubectl get pods -l run=my-nginx -o wide
    NAME                        READY     STATUS    RESTARTS   AGE       IP             NODE
    my-nginx-1636613490-9ibg1   1/1       Running   0          13m       192.168.31.2   172.16.203.134
    my-nginx-1636613490-erx98   1/1       Running   0          13m       192.168.56.3   172.16.203.133
    
    $ kubectl expose deployment/my-nginx
    
    $ kubectl get service my-nginx
    NAME       CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
    my-nginx   172.18.28.48   <none>        80/TCP    37s

    From any of the three hosts, the nginx service is reachable via both the pod IPs and the service IP:

    $ curl http://172.18.28.48
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html> 

    Authentication and security

    When we generated the kubeconfig file in the last step, we created a username and password, but they were never enabled on the apiserver (that would require the --basic-auth-file flag). In other words, anyone who can reach 172.16.203.133:8080 can operate the cluster. For an internal system with proper network access rules in place, this may be acceptable.

    To strengthen security, certificate authentication can be enabled in one of two ways: between both the minions and the master and the clients and the master, or only between the clients and the master.

    For generating and configuring minion certificates, see http://kubernetes.io/docs/getting-started-guides/scratch/#security-models and the relevant parts of http://kubernetes.io/docs/getting-started-guides/ubuntu-calico/.

    Here we look at enabling certificate authentication only between clients and the master. This is still reasonably secure: the minions and the master typically sit in the same data center, so access to HTTP port 8080 can be restricted to the data center, while clients must present a certificate over HTTPS to reach the apiserver. A sketch of such a port restriction follows.
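    As an illustration (the subnet below is assumed from the host addresses used in this article; adapt it to your network and persist the rule however you normally do):

    # drop apiserver HTTP traffic that does not originate from the data-center subnet
    sudo iptables -A INPUT -p tcp --dport 8080 ! -s 172.16.203.0/24 -j DROP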

    Create a client certificate

    Run the following commands on the master host:

    cd /srv/kubernetes

    export CLIENT_IP=172.16.203.1
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/CN=${CLIENT_IP}" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 10000
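    As with the server certificate, the result can be checked against the CA before copying it anywhere (a standard openssl check, not from the original):

    openssl verify -CAfile ca.crt client.crt    # should print "client.crt: OK"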

    Copy client.crt and client.key to the deployment machine, then run the following commands to generate the kubeconfig file:

    DEFAULT_KUBECONFIG="${HOME}/.kube/config"
    KUBECONFIG=${KUBECONFIG:-$DEFAULT_KUBECONFIG}
    mkdir -p $(dirname "${KUBECONFIG}")
    touch "${KUBECONFIG}"
    CONTEXT=ubuntu
    KUBE_CERT=client.crt
    KUBE_KEY=client.key
    
    KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-cluster "${CONTEXT}" --server=https://172.16.203.133:6443 --insecure-skip-tls-verify=true
    KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-credentials "${CONTEXT}" --client-certificate=${KUBE_CERT} --client-key=${KUBE_KEY} --embed-certs=true
    KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-context "${CONTEXT}" --cluster="${CONTEXT}" --user="${CONTEXT}"
    KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config use-context "${CONTEXT}"  --cluster="${CONTEXT}"

    Deploy add-on components

    Deploy DNS

    DNS_SERVER_IP="172.18.8.8"
    DNS_DOMAIN="cluster.local"
    DNS_REPLICAS=1
    KUBE_APISERVER_URL=http://172.16.203.133:8080
    
    cat <<EOF > skydns.yml
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: kube-dns-v17.1
      namespace: kube-system
      labels:
        k8s-app: kube-dns
        version: v17.1
        kubernetes.io/cluster-service: "true"
    spec:
      replicas: $DNS_REPLICAS
      selector:
        k8s-app: kube-dns
        version: v17.1
      template:
        metadata:
          labels:
            k8s-app: kube-dns
            version: v17.1
            kubernetes.io/cluster-service: "true"
        spec:
          containers:
          - name: kubedns
            image: gcr.io/google_containers/kubedns-amd64:1.5
            resources:
              # TODO: Set memory limits when we've profiled the container for large
              # clusters, then set request = limit to keep this container in
              # guaranteed class. Currently, this container falls into the
              # "burstable" category so the kubelet doesn't backoff from restarting it.
              limits:
                cpu: 100m
                memory: 170Mi
              requests:
                cpu: 100m
                memory: 70Mi
            livenessProbe:
              httpGet:
                path: /healthz
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 60
              timeoutSeconds: 5
              successThreshold: 1
              failureThreshold: 5
            readinessProbe:
              httpGet:
                path: /readiness
                port: 8081
                scheme: HTTP
              # we poll on pod startup for the Kubernetes master service and
              # only setup the /readiness HTTP server once that's available.
              initialDelaySeconds: 30
              timeoutSeconds: 5
            args:
            # command = "/kube-dns"
            - --domain=$DNS_DOMAIN.
            - --dns-port=10053
            - --kube-master-url=$KUBE_APISERVER_URL
            ports:
            - containerPort: 10053
              name: dns-local
              protocol: UDP
            - containerPort: 10053
              name: dns-tcp-local
              protocol: TCP
          - name: dnsmasq
            image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
            args:
            - --cache-size=1000
            - --no-resolv
            - --server=127.0.0.1#10053
            ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
          - name: healthz
            image: gcr.io/google_containers/exechealthz-amd64:1.1
            resources:
              # keep request = limit to keep this container in guaranteed class
              limits:
                cpu: 10m
                memory: 50Mi
              requests:
                cpu: 10m
                # Note that this container shouldn't really need 50Mi of memory. The
                # limits are set higher than expected pending investigation on #29688.
                # The extra memory was stolen from the kubedns container to keep the
                # net memory requested by the pod constant.
                memory: 50Mi
            args:
            - -cmd=nslookup kubernetes.default.svc.$DNS_DOMAIN 127.0.0.1 >/dev/null
            - -port=8080
            - -quiet
            ports:
            - containerPort: 8080
              protocol: TCP
          dnsPolicy: Default  # Don't use cluster DNS.
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns
      namespace: kube-system
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        kubernetes.io/name: "KubeDNS"
    spec:
      selector:
        k8s-app: kube-dns
      clusterIP: $DNS_SERVER_IP
      ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
        protocol: TCP
    EOF
    
    kubectl create -f skydns.yml

    Then modify kubelet.service on each node, adding --cluster-dns=172.18.8.8 and --cluster-domain=cluster.local to the kubelet arguments, and restart the kubelet. DNS can then be verified from inside a pod, as sketched below.
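    A minimal in-cluster check, using a throwaway busybox pod (the pod name and manifest are illustrative, not from the original):

    cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]
    EOF
    kubectl exec busybox -- nslookup kubernetes.default.svc.cluster.local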

    Deploy the Dashboard

    cat <<'EOF' > kube-dashboard.yml
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      labels:
        app: kubernetes-dashboard
        version: v1.1.0
      name: kubernetes-dashboard
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: kubernetes-dashboard
      template:
        metadata:
          labels:
            app: kubernetes-dashboard
        spec:
          containers:
          - name: kubernetes-dashboard
            image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
            imagePullPolicy: Always
            ports:
            - containerPort: 9090
              protocol: TCP
            args:
            - --apiserver-host=http://172.16.203.133:8080
            livenessProbe:
              httpGet:
                path: /
                port: 9090
              initialDelaySeconds: 30
              timeoutSeconds: 30
    ---
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kube-system
    spec:
      type: ClusterIP
      ports:
      - port: 80
        targetPort: 9090
      selector:
        app: kubernetes-dashboard
    EOF
    
    kubectl create -f kube-dashboard.yml
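    Once the pod is running, the dashboard is reachable on its service's cluster IP from inside the cluster network:

    kubectl --namespace=kube-system get pods -l app=kubernetes-dashboard
    kubectl --namespace=kube-system get service kubernetes-dashboard    # browse to http://<cluster-ip>/ from any cluster host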