Understanding Kubernetes (1): Manually Building a Kubernetes Test Environment


    Articles in the "Understanding Kubernetes" series:

    1. Manually building a test environment
    2. Basic concepts and operations

      

    1. Preparing the Base Environment

    Prepare three Ubuntu nodes running 16.04 and complete the following configuration:

    • Upgrade the system packages
    • Set up /etc/hosts identically on all nodes
    • Enable passwordless SSH from node 0 to the other two nodes (a sketch of these steps follows the table below)
    Node name   IP address     etcd  flanneld  K8S components                            docker
    kub-node-0  172.23.100.4   Y     Y         master: kubectl, kube-apiserver,          Y
                                               kube-controller-manager, kube-scheduler
    kub-node-1  172.23.100.5   Y     Y         node: kube-proxy, kubelet                 Y
    kub-node-2  172.23.100.6   Y     Y         node: kube-proxy, kubelet                 Y
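    A minimal sketch of the prerequisite steps, assuming the host names and IP addresses from the table above (adjust user names and addresses to your environment):

    # /etc/hosts -- keep identical on all three nodes
    sudo tee -a /etc/hosts >/dev/null <<EOF
    172.23.100.4 kub-node-0
    172.23.100.5 kub-node-1
    172.23.100.6 kub-node-2
    EOF

    # system upgrade, on every node
    sudo apt-get update && sudo apt-get -y upgrade

    # on kub-node-0 only: passwordless ssh to the other two nodes
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    ssh-copy-id ubuntu@kub-node-1
    ssh-copy-id ubuntu@kub-node-2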

    2. Installation and Deployment

    2.1 Installing etcd

    2.1.1 Installation

    Run the following commands on all three nodes to install etcd 3.2.5:
    ETCD_VERSION=${ETCD_VERSION:-"3.2.5"}
    ETCD="etcd-v${ETCD_VERSION}-linux-amd64"
    curl -L https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/${ETCD}.tar.gz -o etcd.tar.gz
    tar xzf etcd.tar.gz -C /tmp
    mkdir -p /opt/bin
    mv /tmp/${ETCD}/etcd /tmp/${ETCD}/etcdctl /opt/bin/
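    Optionally, confirm the binaries are where the service file below expects them:

    /opt/bin/etcd --version
    /opt/bin/etcdctl --version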

    2.1.2 Configuration

    Perform the following configuration on all three nodes:
    • Create the required directories:
    sudo mkdir -p /var/lib/etcd/
    sudo mkdir -p /opt/config/
    • Create the /opt/config/etcd.conf file (the example below is for kub-node-0):
    ETCD_DATA_DIR=/var/lib/etcd
    ETCD_NAME="kub-node-0"
    ETCD_INITIAL_CLUSTER="kub-node-0=http://172.23.100.4:2380,kub-node-1=http://172.23.100.5:2380,kub-node-2=http://172.23.100.6:2380"
    ETCD_INITIAL_CLUSTER_STATE=new
    ETCD_LISTEN_PEER_URLS=http://172.23.100.4:2380
    ETCD_INITIAL_ADVERTISE_PEER_URLS=http://172.23.100.4:2380
    ETCD_ADVERTISE_CLIENT_URLS=http://172.23.100.4:2379
    ETCD_LISTEN_CLIENT_URLS=http://172.23.100.4:2379,http://127.0.0.1:2379

    Notes:

    (1) After the etcd cluster is up on node 0, change ETCD_INITIAL_CLUSTER_STATE on nodes 1 and 2 to existing, which means they join the existing cluster. Otherwise each of them will create its own cluster instead of joining this one.
    (2) On each node, the IP addresses must be changed to that node's own address (see the example below).
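    For example, following notes (1) and (2), /opt/config/etcd.conf on kub-node-1 would look like this (a sketch; kub-node-2 is analogous with 172.23.100.6):

    ETCD_DATA_DIR=/var/lib/etcd
    ETCD_NAME="kub-node-1"
    ETCD_INITIAL_CLUSTER="kub-node-0=http://172.23.100.4:2380,kub-node-1=http://172.23.100.5:2380,kub-node-2=http://172.23.100.6:2380"
    ETCD_INITIAL_CLUSTER_STATE=existing
    ETCD_LISTEN_PEER_URLS=http://172.23.100.5:2380
    ETCD_INITIAL_ADVERTISE_PEER_URLS=http://172.23.100.5:2380
    ETCD_ADVERTISE_CLIENT_URLS=http://172.23.100.5:2379
    ETCD_LISTEN_CLIENT_URLS=http://172.23.100.5:2379,http://127.0.0.1:2379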
    • Create the /lib/systemd/system/etcd.service file:
    [Unit]
    Description=Etcd Server
    Documentation=https://github.com/coreos/etcd
    After=network.target
    [Service]
    User=root
    Type=simple
    EnvironmentFile=-/opt/config/etcd.conf
    ExecStart=/opt/bin/etcd
    Restart=on-failure
    RestartSec=10s
    LimitNOFILE=40000
    [Install]
    WantedBy=multi-user.target

    This file is identical on every node.

    • Start the service on all three nodes:
    systemctl daemon-reload
    systemctl enable etcd
    systemctl start etcd

    2.1.3 Testing the Service

    • Check the etcd cluster health:
    root@kub-node-2:/home/ubuntu# /opt/bin/etcdctl cluster-health
    member 664b85ff39242fbc is healthy: got healthy result from http://172.23.100.6:2379
    member 9dd263662a4b6f73 is healthy: got healthy result from http://172.23.100.4:2379
    member b17535572fd6a37b is healthy: got healthy result from http://172.23.100.5:2379
    cluster is healthy
    • List the etcd cluster members:
    root@kub-node-0:/home/ubuntu# /opt/bin/etcdctl member list
    9dd263662a4b6f73: name=kub-node-0 peerURLs=http://172.23.100.4:2380 clientURLs=http://172.23.100.4:2379 isLeader=false
    b17535572fd6a37b: name=kub-node-1 peerURLs=http://172.23.100.5:2380 clientURLs=http://172.23.100.5:2379 isLeader=true
    e6db3cac1db23670: name=kub-node-2 peerURLs=http://172.23.100.6:2380 clientURLs=http://172.23.100.6:2379 isLeader=false
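    As an extra sanity check, a key written on one node should be readable on another (a sketch using the etcd v2 API, which etcdctl defaults to in this release):

    # on kub-node-0
    /opt/bin/etcdctl set /sanity/ping pong
    # on kub-node-1
    /opt/bin/etcdctl get /sanity/ping
    # clean up
    /opt/bin/etcdctl rm /sanity/ping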

    2.2 Deploying flanneld

    2.2.1 Installing version 0.8.0

    On every node:

    curl -L https://github.com/coreos/flannel/releases/download/v0.8.0/flannel-v0.8.0-linux-amd64.tar.gz -o flannel.tar.gz
    tar xzf flannel.tar.gz -C /tmp
    mv /tmp/flanneld /opt/bin/

    2.2.2 Configuration

    On every node:
    • Create the /lib/systemd/system/flanneld.service file:
    [Unit]
    Description=Flanneld
    Documentation=https://github.com/coreos/flannel
    After=network.target
    Before=docker.service
    [Service]
    User=root
    ExecStart=/opt/bin/flanneld \
      --etcd-endpoints="http://172.23.100.4:2379,http://172.23.100.5:2379,http://172.23.100.6:2379" \
      --iface=172.23.100.4 \
      --ip-masq
    Restart=on-failure
    Type=notify
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target

    Note: on each node, set --iface to that node's own IP address.

    • On node 0, run:
    /opt/bin/etcdctl --endpoints="http://172.23.100.4:2379,http://172.23.100.5:2379,http://172.23.100.6:2379" mk /coreos.com/network/config  '{"Network":"10.1.0.0/16", "Backend": {"Type": "vxlan"}}'

    Confirm:

    root@kub-node-0:/home/ubuntu# /opt/bin/etcdctl --endpoints="http://172.23.100.4:2379,http://172.23.100.5:2379,http://172.23.100.6:2379" get /coreos.com/network/config
     {"Network":"10.1.0.0/16", "Backend": {"Type": "vxlan"}}
    • Start flanneld on all three nodes:
    systemctl daemon-reload
    systemctl enable flanneld
    systemctl start flanneld

    Note: the flanneld service must be started before Docker. At startup, flanneld does roughly the following:

    • Fetches the network configuration from etcd.
    • Allocates a subnet for this node and registers it in etcd.
    • Writes the subnet information to /run/flannel/subnet.env (a sample is shown below).
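    On kub-node-0 the generated /run/flannel/subnet.env looks roughly like this (a sketch based on the subnet and bridge addresses shown in this guide; the MTU value may differ on your network):

    FLANNEL_NETWORK=10.1.0.0/16
    FLANNEL_SUBNET=10.1.35.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true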

    At this point, the allocated subnets can be seen in etcd:

    root@kub-node-0:/home/ubuntu/kub# /opt/bin/etcdctl --endpoints="http://172.23.100.4:2379,http://172.23.100.5:2379,http://172.23.100.6:2379" ls /coreos.com/network/subnets
    /coreos.com/network/subnets/10.1.35.0-24
    /coreos.com/network/subnets/10.1.1.0-24
    /coreos.com/network/subnets/10.1.79.0-24

    2.2.3 Verification

    • Check the service status with service flanneld status.
    • Check the flannel.1 virtual interface on each node; its address must match the subnet registered in etcd (a cross-node ping sketch follows the output below).
    root@kub-node-0:/home/ubuntu/kub# ifconfig flannel.1
    flannel.1 Link encap:Ethernet  HWaddr 22:fc:69:01:33:30
              inet addr:10.1.35.0  Bcast:0.0.0.0  Mask:255.255.255.255
     
    root@kub-node-1:/home/ubuntu# ifconfig flannel.1
    flannel.1 Link encap:Ethernet  HWaddr 0a:6e:a6:6f:95:04
              inet addr:10.1.1.0  Bcast:0.0.0.0  Mask:255.255.255.255
     
    root@kub-node-2:/home/ubuntu# ifconfig flannel.1
    flannel.1 Link encap:Ethernet  HWaddr 6e:10:b3:53:1e:f4
              inet addr:10.1.79.0  Bcast:0.0.0.0  Mask:255.255.255.255
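    A quick cross-node check of the overlay, assuming the flannel.1 addresses shown above (run from kub-node-0):

    ping -c 3 10.1.1.0    # flannel.1 on kub-node-1
    ping -c 3 10.1.79.0   # flannel.1 on kub-node-2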

    2.3 Deploying Docker

    2.3.1 Installation

    Following https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#install-docker-ce-1, run the following commands on every node to install Docker:
       sudo apt-get update
       sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
       curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
       sudo apt-key fingerprint 0EBFCD88
       sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
       sudo apt-get update
       sudo apt-get install docker-ce

    2.3.2 Verification

    Create and run a hello-world container:

    root@kub-node-0:/home/ubuntu/kub# docker run hello-world
    Unable to find image 'hello-world:latest' locally
    latest: Pulling from library/hello-world
    ca4f61b1923c: Pull complete
    Digest: sha256:445b2fe9afea8b4aa0b2f27fe49dd6ad130dfe7a8fd0832be5de99625dad47cd
    Status: Downloaded newer image for hello-world:latest
     
    Hello from Docker!
    This message shows that your installation appears to be working correctly.

    2.3.3 Configuration

    On every node:
    • Go to the /tmp directory (where the flannel tarball was extracted) and copy the helper script: cp mk-docker-opts.sh /usr/bin/
    • Run the following commands:
    root@kub-node-0:/home/ubuntu/kub# mk-docker-opts.sh -i
    root@kub-node-0:/home/ubuntu/kub# source /run/flannel/subnet.env
    root@kub-node-0:/home/ubuntu/kub# ifconfig docker0
    docker0   Link encap:Ethernet  HWaddr 02:42:bc:71:d0:22
              inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
              inet6 addr: fe80::42:bcff:fe71:d022/64 Scope:Link
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:258 (258.0 B)
     
    root@kub-node-0:/home/ubuntu/kub# ifconfig docker0 ${FLANNEL_SUBNET}
    root@kub-node-0:/home/ubuntu/kub# ifconfig docker0
    docker0   Link encap:Ethernet  HWaddr 02:42:bc:71:d0:22
              inet addr:10.1.35.1  Bcast:10.1.35.255  Mask:255.255.255.0
              inet6 addr: fe80::42:bcff:fe71:d022/64 Scope:Link
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:258 (258.0 B)
    • Modify the [Service] section of /lib/systemd/system/docker.service as follows:
    [Service]
    Type=notify
    # the default is not to use systemd for cgroups because the delegate issues still
    # exists and systemd currently does not support the cgroup feature set required
    # for containers run by docker
    EnvironmentFile=/var/run/flannel/subnet.env
    ExecStart=/usr/bin/dockerd -g /data/docker  --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
    ExecReload=/bin/kill -s HUP $MAINPID
    #ExecStart=/usr/bin/dockerd -H fd://
    #ExecReload=/bin/kill -s HUP $MAINPID
    • Relax the iptables rules:
    iptables -F
    iptables -X
    iptables -Z
    iptables -P INPUT ACCEPT
    iptables -P OUTPUT ACCEPT
    iptables -P FORWARD ACCEPT
    iptables-save
    • Restart the Docker service:
    systemctl daemon-reload
    systemctl enable docker
    systemctl restart docker
    • Verify
    On each of the three nodes, run docker run -it ubuntu bash to start an Ubuntu container; their IPs are 10.1.35.2, 10.1.79.2, and 10.1.1.2 respectively, and they can ping one another (a sketch of this check follows).
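    A minimal sketch of that cross-node container check. It uses busybox instead of ubuntu because busybox ships with ping; the container IPs below are the ones observed above and will differ in your environment:

    # on every node: start a long-running test container on the flannel-backed bridge
    docker run -d --name nettest busybox sleep 3600
    # show the container's IP on this node
    docker inspect -f '{{ .NetworkSettings.IPAddress }}' nettest
    # from kub-node-0, ping the containers on the other two nodes
    docker exec nettest ping -c 3 10.1.1.2
    docker exec nettest ping -c 3 10.1.79.2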

    2.4 Creating and Configuring Certificates

    2.4.1 Configuration on node 0

    • On node 0, create the master_ssl.cnf file:
    [req]
    req_extensions = v3_req
    distinguished_name = req_distinguished_name
    [req_distinguished_name]
    [ v3_req ]
    basicConstraints = CA:FALSE
    keyUsage = nonRepudiation, digitalSignature, keyEncipherment
    subjectAltName = @alt_names
    [alt_names]
    DNS.1 = kubernetes
    DNS.2 = kubernetes.default
    DNS.3 = kubernetes.default.svc
    DNS.4 = kubernetes.default.svc.cluster.local
    DNS.5 = master
    IP.1 = 192.1.0.1
    IP.2 = 172.23.100.4
    • Generate the master certificates:
    openssl genrsa -out ca.key 2048
    openssl req -x509 -new -nodes -key ca.key -subj "/CN=company.com" -days 10000 -out ca.crt
    openssl genrsa -out server.key 2048
    openssl req -new -key server.key -subj "/CN=master" -config master_ssl.cnf -out server.csr
    openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 10000 -extensions v3_req -extfile master_ssl.cnf -out server.crt
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/CN=node" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 10000
    • Copy the generated files into the /root/key directory:
    root@kub-node-0:/home/ubuntu/kub# ls /root/key
    ca.crt  ca.key  client.crt  client.key  server.crt  server.key
    • Copy ca.crt and ca.key to the /home/ubuntu/kub directory on each worker node (a sketch of the copy plus an optional certificate check follows).
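    A sketch of the copy step plus an optional check that the certificates chain back to the CA and that the API server certificate carries the expected SubjectAltName entries (run from the directory where the files were generated):

    # distribute the CA material to the worker nodes
    scp ca.crt ca.key ubuntu@kub-node-1:/home/ubuntu/kub/
    scp ca.crt ca.key ubuntu@kub-node-2:/home/ubuntu/kub/
    # optional: verify the server and client certificates against the CA
    openssl verify -CAfile ca.crt server.crt client.crt
    # optional: inspect the SANs in the API server certificate
    openssl x509 -in server.crt -noout -text | grep -A 1 "Subject Alternative Name"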

    2.4.2 Configuration on nodes 1 and 2

    Perform the following steps on nodes 1 and 2. The example below is for node 2; on node 1, change the IP address accordingly.

    • Run:
    CLIENT_IP=172.23.100.6
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/CN=${CLIENT_IP}" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 10000
    • The result:
    root@kub-node-2:/home/ubuntu/kub# ls -lt
    total 8908
    -rw-r--r-- 1 root root     985 Dec 31 20:57 client.crt
    -rw-r--r-- 1 root root      17 Dec 31 20:57 ca.srl
    -rw-r--r-- 1 root root     895 Dec 31 20:57 client.csr
    -rw-r--r-- 1 root root    1675 Dec 31 20:57 client.key
    -rw-r--r-- 1 root root    1099 Dec 31 20:54 ca.crt
    -rw-r--r-- 1 root root    1675 Dec 31 20:54 ca.key
    • Copy the client and CA .crt and .key files into the /root/key directory, which then contains four files:
    root@kub-node-2:/home/ubuntu# ls /root/key
    ca.crt  ca.key  client.crt  client.key
    • Create the /etc/kubernetes/kubeconfig file:
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority: /root/key/ca.crt
        server: https://172.23.100.4:6443
      name: ubuntu
    contexts:
    - context:
        cluster: ubuntu
        user: ubuntu
      name: ubuntu
    current-context: ubuntu
    kind: Config
    preferences: {}
    users:
    - name: ubuntu
      user:
        client-certificate: /root/key/client.crt
        client-key: /root/key/client.key

    2.5 Configuring the Kubernetes Master Node

    Perform the following steps on node 0.

    2.5.1 Installing Kubernetes 1.8.5

    curl -L https://dl.k8s.io/v1.8.5/kubernetes-server-linux-amd64.tar.gz -o kuber.tar.gz
    mkdir -p /tmp3 && tar xzf kuber.tar.gz -C /tmp3
    mv /tmp3/kubernetes/server/bin/* /opt/bin
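    Optionally, confirm that the expected version is in place:

    /opt/bin/kube-apiserver --version
    /opt/bin/kubectl version --client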

    2.5.2 Configuring the Services

    • Create the /lib/systemd/system/kube-apiserver.service file:
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
    [Service]
    User=root
    ExecStart=/opt/bin/kube-apiserver \
      --secure-port=6443 \
      --etcd-servers=http://172.23.100.4:2379,http://172.23.100.5:2379,http://172.23.100.6:2379 \
      --logtostderr=false \
      --log-dir=/var/log/kubernetes \
      --allow-privileged=false \
      --service-cluster-ip-range=192.1.0.0/16 \
      --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota \
      --service-node-port-range=30000-32767 \
      --advertise-address=172.23.100.4 \
      --client-ca-file=/root/key/ca.crt \
      --tls-cert-file=/root/key/server.crt \
      --tls-private-key-file=/root/key/server.key
    Restart=on-failure
    Type=notify
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target
    • Create the /lib/systemd/system/kube-controller-manager.service file:
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    [Service]
    User=root
    ExecStart=/opt/bin/kube-controller-manager \
      --master=https://172.23.100.4:6443 \
      --root-ca-file=/root/key/ca.crt \
      --service-account-private-key-file=/root/key/server.key \
      --kubeconfig=/etc/kubernetes/kubeconfig \
      --logtostderr=false \
      --log-dir=/var/log/kubernetes
    Restart=on-failure
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target
    • Create the /lib/systemd/system/kube-scheduler.service file:
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    [Service]
    User=root
    ExecStart=/opt/bin/kube-scheduler \
      --logtostderr=false \
      --log-dir=/var/log/kubernetes \
      --master=https://172.23.100.4:6443 \
      --kubeconfig=/etc/kubernetes/kubeconfig
    Restart=on-failure
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target
    • Start the services:
    systemctl daemon-reload
    systemctl enable kube-apiserver
    systemctl enable kube-controller-manager
    systemctl enable kube-scheduler
    systemctl enable flanneld
    systemctl start kube-apiserver
    systemctl start kube-controller-manager
    systemctl start kube-scheduler
    • Confirm the status of each service (an additional API check follows the commands below):
    systemctl status kube-apiserver
    systemctl status kube-controller-manager
    systemctl status kube-scheduler 
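    Beyond systemctl status, the API server can be probed over its secure port with the certificates from section 2.4 (a sketch; run on node 0):

    curl --cacert /root/key/ca.crt \
         --cert /root/key/client.crt \
         --key /root/key/client.key \
         https://172.23.100.4:6443/version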

    2.6 Configuring kubectl

    On node 0, create the /root/.kube/config file:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority: /root/key/ca.crt
      name: ubuntu
    contexts:
    - context:
        cluster: ubuntu
        user: ubuntu
      name: ubuntu
    current-context: ubuntu
    kind: Config
    preferences: {}
    users:
    - name: ubuntu
      user:
        client-certificate: /root/key/client.crt
        client-key: /root/key/client.key 

    2.7 Configuring the Kubernetes Worker Nodes

    Nodes 1 and 2 are the Kubernetes worker nodes. Perform the following steps on both of them.

    2.7.1 Installation

    Same as section 2.5.1.

    2.7.2 Configuration

    • Perform the steps below on nodes 1 and 2 separately. The files shown are for node 1; on node 2, change 172.23.100.5 to 172.23.100.6.
    • Create the /lib/systemd/system/kubelet.service file:
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    Requires=docker.service
    [Service]
    ExecStart=/opt/bin/kubelet \
      --hostname-override=172.23.100.5 \
      --pod-infra-container-image="docker.io/kubernetes/pause" \
      --cluster-domain=cluster.local \
      --log-dir=/var/log/kubernetes \
      --cluster-dns=192.1.0.100 \
      --kubeconfig=/etc/kubernetes/kubeconfig \
      --logtostderr=false
    Restart=on-failure
    KillMode=process
    [Install]
    WantedBy=multi-user.target
    • Create the /lib/systemd/system/kube-proxy.service file:
    [Unit]
    Description=Kubernetes Proxy
    After=network.target
    [Service]
    ExecStart=/opt/bin/kube-proxy \
      --hostname-override=172.23.100.5 \
      --master=https://172.23.100.4:6443 \
      --log-dir=/var/log/kubernetes \
      --kubeconfig=/etc/kubernetes/kubeconfig \
      --logtostderr=false
    Restart=on-failure
    [Install]
    WantedBy=multi-user.target
    • Start the services:
    systemctl daemon-reload
    systemctl enable kubelet
    systemctl enable kube-proxy
    systemctl start kubelet
    systemctl start kube-proxy
    • Confirm the status of each component:
    systemctl status kubelet
    systemctl status kube-proxy

    3. Verification

    3.1 Getting Cluster Information

    Run the following commands on node 0.

    • Get the master endpoint:
    root@kub-node-0:/home/ubuntu/kub# kubectl cluster-info
    Kubernetes master is running at http://localhost:8080
    • List the worker nodes:
    root@kub-node-0:/home/ubuntu/kub# kubectl get nodes
    NAME           STATUS    ROLES     AGE       VERSION
    172.23.100.5   Ready     <none>    2d        v1.8.5
    172.23.100.6   Ready     <none>    2d        v1.8.5

    3.2 Deploying the First Application

    • Create the nginx4.yml file:
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: my-nginx4
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            app: my-nginx4
        spec:
          containers:
          - name: my-nginx4
            image: nginx
            ports:
            - containerPort: 80
    • Create a Deployment:
    root@kub-node-0:/home/ubuntu/kub# kubectl create -f nginx4.yml
    deployment "my-nginx4" created
    • Check the status:
    root@kub-node-0:/home/ubuntu/kub# kubectl get all
    NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deploy/my-nginx4   2         2         2            2           3m
    
    NAME                      DESIRED   CURRENT   READY     AGE
    rs/my-nginx4-75bbfccc7c   2         2         2         3m
    
    NAME                            READY     STATUS    RESTARTS   AGE
    po/my-nginx4-75bbfccc7c-5frpl   1/1       Running   0          3m
    po/my-nginx4-75bbfccc7c-5kr4j   1/1       Running   0          3m
    
    NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    svc/kubernetes   ClusterIP   192.1.0.1    <none>        443/TCP   2d
    • View the details of the Deployment:
    root@kub-node-0:/home/ubuntu/kub# kubectl describe deployments my-nginx4
    Name:                   my-nginx4
    Namespace:              default
    CreationTimestamp:      Wed, 03 Jan 2018 09:16:44 +0800
    Labels:                 app=my-nginx4
    Annotations:            deployment.kubernetes.io/revision=1
    Selector:               app=my-nginx4
    Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
    StrategyType:           RollingUpdate
    MinReadySeconds:        0
    RollingUpdateStrategy:  1 max unavailable, 1 max surge
    Pod Template:
      Labels:  app=my-nginx4
      Containers:
       my-nginx4:
        Image:        nginx
        Port:         80/TCP
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Available      True    MinimumReplicasAvailable
    OldReplicaSets:  <none>
    NewReplicaSet:   my-nginx4-75bbfccc7c (2/2 replicas created)
    Events:
      Type    Reason             Age   From                   Message
      ----    ------             ----  ----                   -------
      Normal  ScalingReplicaSet  1m    deployment-controller  Scaled up replica set my-nginx4-75bbfccc7c to 2
    • The details of a pod show its containers, IP address, and the node it runs on:
    root@kub-node-0:/home/ubuntu/kub# kubectl describe pod my-nginx4-75bbfccc7c-5frpl
    Name:           my-nginx4-75bbfccc7c-5frpl
    Namespace:      default
    Node:           172.23.100.5/172.23.100.5
    Start Time:     Wed, 03 Jan 2018 09:16:45 +0800
    Labels:         app=my-nginx4
                    pod-template-hash=3166977737
    Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"my-nginx4-75bbfccc7c","uid":"c2d83729-f023-11e7-a605-fa163e9a22a...
    Status:         Running
    IP:             10.1.1.3
    Created By:     ReplicaSet/my-nginx4-75bbfccc7c
    Controlled By:  ReplicaSet/my-nginx4-75bbfccc7c
    Containers:
      my-nginx4:
        Container ID:   docker://4a994121e309fb81181e22589982bf8c053287616ba7c92dcddc5e7fb49927b1
        Image:          nginx
        Image ID:       docker-pullable://nginx@sha256:cf8d5726fc897486a4f628d3b93483e3f391a76ea4897de0500ef1f9abcd69a1
        Port:           80/TCP
        State:          Running
          Started:      Wed, 03 Jan 2018 09:16:53 +0800
        Ready:          True
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-b2p4z (ro)
    Conditions:
      Type           Status
      Initialized    True
      Ready          True
      PodScheduled   True
    Volumes:
      default-token-b2p4z:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-b2p4z
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     <none>
    Events:
      Type    Reason                 Age   From                   Message
      ----    ------                 ----  ----                   -------
      Normal  Scheduled              5m    default-scheduler      Successfully assigned my-nginx4-75bbfccc7c-5frpl to 172.23.100.5
      Normal  SuccessfulMountVolume  5m    kubelet, 172.23.100.5  MountVolume.SetUp succeeded for volume "default-token-b2p4z"
      Normal  Pulling                5m    kubelet, 172.23.100.5  pulling image "nginx"
      Normal  Pulled                 5m    kubelet, 172.23.100.5  Successfully pulled image "nginx"
      Normal  Created                5m    kubelet, 172.23.100.5  Created container
      Normal  Started                5m    kubelet, 172.23.100.5  Started container
    • On node 1 you can see the containers that belong to this pod. The pause container is special: it is the Kubernetes infrastructure container that holds the pod's shared namespaces.
    root@kub-node-1:/home/ubuntu# docker ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
    4a994121e309        nginx               "nginx -g 'daemon of…"   2 minutes ago       Up 2 minutes                            k8s_my-nginx4_my-nginx4-75bbfccc7c-5frpl_default_c35b9521-f023-11e7-a605-fa163e9a22a6_0
    e3f39d708800        kubernetes/pause    "/pause"                 2 minutes ago       Up 2 minutes                            k8s_POD_my-nginx4-75bbfccc7c-5frpl_default_c35b9521-f023-11e7-a605-fa163e9a22a6_0
    • Create a NodePort Service to expose the application:
    root@kub-node-0:/home/ubuntu/kub# kubectl expose deployment my-nginx4 --type=NodePort --name=nginx-nodeport
    service "nginx-nodeport" exposed
    • The port exposed on each node's IP is 31362:
    root@kub-node-0:/home/ubuntu/kub# kubectl get svc
    NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    kubernetes       ClusterIP   192.1.0.1       <none>        443/TCP        2d
    nginx-nodeport   NodePort    192.1.216.223   <none>        80:31362/TCP   31s
    • Access nginx via <node-ip>:<node-port>:
    root@kub-node-0:/home/ubuntu/kub# curl http://172.23.100.5:31362
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>

    4. Pitfalls Encountered

    • With Kubernetes 1.7.2, pods could not be created and kubelet kept logging the errors below. This is caused by a bug in that version; switching to 1.8.5 fixed it.
    W0101 20:25:25.636397   25702 helpers.go:771] eviction manager: no observation found for eviction signal allocatableNodeFs.available
    W0101 20:25:35.680877   25702 helpers.go:771] eviction manager: no observation found for eviction signal allocatableNodeFs.available
    W0101 20:25:45.728875   25702 helpers.go:771] eviction manager: no observation found for eviction signal allocatableNodeFs.available
    W0101 20:25:55.756455   25702 helpers.go:771] eviction manager: no observation found for eviction signal allocatableNodeFs.available
    • The logs of the Kubernetes services were not visible. The fix is to set --logtostderr=false, add --log-dir in each service's configuration file, and create that directory manually.
    • Deploying an application from the hello-world image left the pods stuck in CrashLoopBackOff. The reason is that this container exits right after it starts, so Kubernetes keeps restarting the pod (see the sketch after this list for a way to run such one-shot containers).
    root@kub-node-0:/home/ubuntu# kubectl get pods
    NAME                          READY     STATUS             RESTARTS   AGE
    hello-world-5c9bd8867-76jjg   0/1       CrashLoopBackOff   7          12m
    hello-world-5c9bd8867-92275   0/1       CrashLoopBackOff   7          12m
    hello-world-5c9bd8867-cn75n   0/1       CrashLoopBackOff   7          12m
    • The first deployment failed and the pods stayed in ContainerCreating; the kubelet log is below. The cause is that kubelet tries to pull the pause image from gcr.io, which is blocked. The fix is to set --pod-infra-container-image="docker.io/kubernetes/pause" in the kubelet service file. The root-cause analysis is here.
    E0101 22:34:51.908652   29137 kuberuntime_manager.go:633] createPodSandbox for pod "my-nginx3-596b5c5f58-vgvlb_default(aedfbe1b-eefc-11e7-b10d-fa163e9a22a6)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    E0101 22:34:51.908755   29137 pod_workers.go:182] Error syncing pod aedfbe1b-eefc-11e7-b10d-fa163e9a22a6 ("my-nginx3-596b5c5f58-vgvlb_default(aedfbe1b-eefc-11e7-b10d-fa163e9a22a6)"), skipping: failed to "CreatePodSandbox" for "my-nginx3-596b5c5f58-vgvlb_default(aedfbe1b-eefc-11e7-b10d-fa163e9a22a6)" with CreatePodSandboxError: "CreatePodSandbox for pod "my-nginx3-596b5c5f58-vgvlb_default(aedfbe1b-eefc-11e7-b10d-fa163e9a22a6)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)”
    • After creating a Service of type LoadBalancer, its EXTERNAL-IP stayed in Pending. The reason is that this feature needs support from the underlying cloud platform; switching the Service to NodePort works fine.
    • After deploying the nginx application and its NodePort Service, the application could not be reached through the node port. The cause was a wrong containerPort in the YAML file: nginx listens on port 80. Redeploying with the corrected value solved the problem.
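    For the CrashLoopBackOff pitfall above, one workaround is to run such a run-to-completion image as a Job instead of a Deployment, so Kubernetes does not expect the container to keep running. A sketch (with kubectl 1.8, --restart=OnFailure creates a Job):

    kubectl run hello-world --image=hello-world --restart=OnFailure
    kubectl get jobs
    kubectl get pods -l job-name=hello-world
    # then inspect the output with: kubectl logs <pod-name>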
     
     
     
    Original post: https://www.cnblogs.com/sammyliu/p/8182078.html