Kubernetes multi-node deployment


    Note: all of the following steps are based on CentOS 7.

    Install Ansible

    Ansible can be installed via yum or pip. Because kubernetes-ansible relies on password-based SSH, sshpass also needs to be installed:

    pip install ansible
    wget http://sourceforge.net/projects/sshpass/files/latest/download
    tar zxvf download
    cd sshpass-1.05
    ./configure && make && make install
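
    The commands above use pip and build sshpass from source. A yum-based alternative is sketched below, assuming the EPEL repository for CentOS 7 carries the ansible and sshpass packages:

    # enable EPEL, then install ansible (and optionally sshpass) from it instead
    yum -y install epel-release
    yum -y install ansible sshpass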
    

    Configure kubernetes-ansible

    # git clone https://github.com/eparis/kubernetes-ansible.git
    # cd kubernetes-ansible
    
    # # Set the ansible user to root in group_vars/all.yml
    # cat group_vars/all.yml | grep ssh
    ansible_ssh_user: root
    
    # # Each kubernetes service gets its own IP address. These are not real IPs. 
    # # You need only select a range of IPs which are not in use elsewhere in your
    # # environment. This must be done even if you do not use the network setup 
    # # provided by the ansible scripts.
    # cat group_vars/all.yml | grep kube_service_addresses
    kube_service_addresses: 10.254.0.0/16
    
    # # Store the root password
    # echo "password" > ~/rootpassword
    

    Configure the IP addresses of the master, etcd, and minion nodes:

    # cat inventory
    [masters]
    192.168.0.7
    
    [etcd]
    192.168.0.7
    
    [minions]
    # kube_ip_addr is the Pod address pool on this minion; the netmask defaults to /24
    192.168.0.3  kube_ip_addr=10.0.1.1 
    192.168.0.6  kube_ip_addr=10.0.2.1
    

    Test the connection to each machine and set up the SSH keys:

    # ansible-playbook -i inventory ping.yml # this command prints some error messages, which can be ignored
    # ansible-playbook -i inventory keys.yml
    

    At the moment kubernetes-ansible does not handle all of the dependencies, so a few things need to be configured manually first:

    # # Install iptables
    # ansible all -i inventory --vault-password-file=~/rootpassword -a 'yum -y install iptables-services'
    # # Add the Kubernetes repository for CentOS 7
    # ansible all -i inventory --vault-password-file=~/rootpassword -a 'curl https://copr.fedoraproject.org/coprs/eparis/kubernetes-epel-7/repo/epel-7/eparis-kubernetes-epel-7-epel-7.repo -o /etc/yum.repos.d/eparis-kubernetes-epel-7-epel-7.repo'
    # # Configure SSH to avoid SSH connection timeouts
    # sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/ssh_config 
    # ansible all -i inventory --vault-password-file=~/rootpassword -a 'sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/ssh_config'
    # ansible all -i inventory --vault-password-file=~/rootpassword -a 'sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/sshd_config'
    # ansible all -i inventory --vault-password-file=~/rootpassword -a 'systemctl restart sshd'
    

    Configure the Docker network; in practice this just means creating the kbr0 bridge, assigning an IP to the bridge, and configuring routes:

    # ansible-playbook -i inventory hack-network.yml  
    
    PLAY [minions] **************************************************************** 
    
    GATHERING FACTS *************************************************************** 
    ok: [192.168.0.6]
    ok: [192.168.0.3]
    
    TASK: [network-hack-bridge | Create kubernetes bridge interface] ************** 
    changed: [192.168.0.3]
    changed: [192.168.0.6]
    
    TASK: [network-hack-bridge | Configure docker to use the bridge inferface] **** 
    changed: [192.168.0.6]
    changed: [192.168.0.3]
    
    PLAY [minions] **************************************************************** 
    
    GATHERING FACTS *************************************************************** 
    ok: [192.168.0.6]
    ok: [192.168.0.3]
    
    TASK: [network-hack-routes | stat path=/etc/sysconfig/network-scripts/ifcfg-{{ ansible_default_ipv4.interface }}] *** 
    ok: [192.168.0.6]
    ok: [192.168.0.3]
    
    TASK: [network-hack-routes | Set up a network config file] ******************** 
    skipping: [192.168.0.3]
    skipping: [192.168.0.6]
    
    TASK: [network-hack-routes | Set up a static routing table] ******************* 
    changed: [192.168.0.3]
    changed: [192.168.0.6]
    
    NOTIFIED: [network-hack-routes | apply changes] ******************************* 
    changed: [192.168.0.6]
    changed: [192.168.0.3]
    
    NOTIFIED: [network-hack-routes | upload script] ******************************* 
    changed: [192.168.0.6]
    changed: [192.168.0.3]
    
    NOTIFIED: [network-hack-routes | run script] ********************************** 
    changed: [192.168.0.3]
    changed: [192.168.0.6]
    
    NOTIFIED: [network-hack-routes | remove script] ******************************* 
    changed: [192.168.0.3]
    changed: [192.168.0.6]
    
    PLAY RECAP ******************************************************************** 
    192.168.0.3                : ok=10   changed=7    unreachable=0    failed=0   
    192.168.0.6                : ok=10   changed=7    unreachable=0    failed=0  
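
    Before moving on, it can be worth sanity-checking the result on one of the minions. The sketch below assumes the bridge-utils package is installed for brctl (ip link show kbr0 works as well):

    # # on a minion: kbr0 should exist and carry the kube_ip_addr assigned in the inventory
    # brctl show kbr0
    # ip addr show kbr0
    # # the routing table should contain routes to the other minions' Pod subnets via their node IPs
    # ip route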
    

    Finally, install and configure Kubernetes on all of the nodes:

    ansible-playbook -i inventory setup.yml
    

    After the run finishes, you can see that the kube-related services are all running:

    # # Service status
    # ansible all -i inventory -k -a 'bash -c "systemctl | grep -i kube"'
    SSH password: 
    192.168.0.3 | success | rc=0 >>
    kube-proxy.service                                                                                     loaded active running   Kubernetes Kube-Proxy Server
    kubelet.service                                                                                        loaded active running   Kubernetes Kubelet Server
    
    192.168.0.7 | success | rc=0 >>
    kube-apiserver.service                                                      loaded active running   Kubernetes API Server
    kube-controller-manager.service                                             loaded active running   Kubernetes Controller Manager
    kube-scheduler.service                                                      loaded active running   Kubernetes Scheduler Plugin
    
    192.168.0.6 | success | rc=0 >>
    kube-proxy.service                                                                                     loaded active running   Kubernetes Kube-Proxy Server
    kubelet.service                                                                                        loaded active running   Kubernetes Kubelet Server
    
    
    # # Listening ports
    # ansible all -i inventory -k -a 'bash -c "netstat -tulnp | grep -E "(kube)|(etcd)""'
    SSH password: 
    192.168.0.7 | success | rc=0 >>
    tcp        0      0 192.168.0.7:7080        0.0.0.0:*               LISTEN      14486/kube-apiserve 
    tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      14544/kube-schedule 
    tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      14515/kube-controll 
    tcp6       0      0 :::7001                 :::*                    LISTEN      13986/etcd          
    tcp6       0      0 :::4001                 :::*                    LISTEN      13986/etcd          
    tcp6       0      0 :::8080                 :::*                    LISTEN      14486/kube-apiserve 
    
    192.168.0.3 | success | rc=0 >>
    tcp        0      0 192.168.0.3:10250       0.0.0.0:*               LISTEN      9500/kubelet        
    tcp6       0      0 :::46309                :::*                    LISTEN      9524/kube-proxy     
    tcp6       0      0 :::48500                :::*                    LISTEN      9524/kube-proxy     
    tcp6       0      0 :::38712                :::*                    LISTEN      9524/kube-proxy     
    
    192.168.0.6 | success | rc=0 >>
    tcp        0      0 192.168.0.6:10250       0.0.0.0:*               LISTEN      9474/kubelet        
    tcp6       0      0 :::52870                :::*                    LISTEN      9498/kube-proxy     
    tcp6       0      0 :::57961                :::*                    LISTEN      9498/kube-proxy     
    tcp6       0      0 :::40720                :::*                    LISTEN      9498/kube-proxy  
    

    Run the following commands to check that the services are working properly:

    # curl -s -L http://192.168.0.7:4001/version # check etcd
    etcd 0.4.6
    # curl -s -L http://192.168.0.7:8080/api/v1beta1/pods  | python -m json.tool # check apiserver
    {
        "apiVersion": "v1beta1",
        "creationTimestamp": null,
        "items": [],
        "kind": "PodList",
        "resourceVersion": 8,
        "selfLink": "/api/v1beta1/pods"
    }
    # curl -s -L http://192.168.0.7:8080/api/v1beta1/minions  | python -m json.tool # check apiserver
    # curl -s -L http://192.168.0.7:8080/api/v1beta1/services  | python -m json.tool # check apiserver
    # kubectl get minions
    NAME
    192.168.0.3
    192.168.0.6
    

    Deploy an Apache service

    First, create a Pod:

    # cat ~/apache.json
    {
      "id": "fedoraapache",
      "kind": "Pod",
      "apiVersion": "v1beta1",
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "fedoraapache",
          "containers": [{
            "name": "fedoraapache",
            "image": "fedora/apache",
            "ports": [{
              "containerPort": 80,
              "hostPort": 80
            }]
          }]
        }
      },
      "labels": {
        "name": "fedoraapache"
      }
    }
    
    # kubectl create -f apache.json
    # kubectl get pod fedoraapache
    NAME                IMAGE(S)            HOST                LABELS              STATUS
    fedoraapache        fedora/apache       192.168.0.6/        name=fedoraapache   Waiting
    
    # # Because the image download is slow, the Pod stays in the Waiting state for quite a while; once the image is downloaded it starts up very quickly
    # kubectl get pod fedoraapache
    NAME                IMAGE(S)            HOST                LABELS              STATUS
    fedoraapache        fedora/apache       192.168.0.6/        name=fedoraapache   Running
    
    # # Check the container status on the 192.168.0.6 machine
    # docker ps
    CONTAINER ID        IMAGE                     COMMAND             CREATED             STATUS              PORTS                NAMES
    77dd7fe1b24f        fedora/apache:latest      "/run-apache.sh"    31 minutes ago      Up 31 minutes                            k8s_fedoraapache.f14c9521_fedoraapache.default.etcd_1416396375_4114a4d0   
    1455249f2c7d        kubernetes/pause:latest   "/pause"            About an hour ago   Up About an hour    0.0.0.0:80->80/tcp   k8s_net.e9a68336_fedoraapache.default.etcd_1416396375_11274cd2  
    # docker images
    REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
    fedora/apache       latest              2e11d8fd18b3        7 weeks ago         554.1 MB
    kubernetes/pause    latest              6c4579af347b        4 months ago        239.8 kB
    # iptables-save | grep 2.2
    -A DOCKER ! -i kbr0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.2.2:80
    -A FORWARD -d 10.0.2.2/32 ! -i kbr0 -o kbr0 -p tcp -m tcp --dport 80 -j ACCEPT
    # curl localhost  # the Pod started fine and the port mapping works as well
    Apache
    

    Replication Controllers

    A replication controller keeps the desired number of containers running, which helps balance load and keeps the service highly available:

    A replication controller combines a template for pod creation (a “cookie-cutter” if you will) and a number of desired replicas, into a single API object. The replica controller also contains a label selector that identifies the set of objects managed by the replica controller. The replica controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods.

    # cat replica.json 
    {
      "id": "apacheController",
      "kind": "ReplicationController",
      "apiVersion": "v1beta1",
      "labels": {"name": "fedoraapache"},
      "desiredState": {
        "replicas": 3,
        "replicaSelector": {"name": "fedoraapache"},
        "podTemplate": {
          "desiredState": {
             "manifest": {
               "version": "v1beta1",
               "id": "fedoraapache",
               "containers": [{
                 "name": "fedoraapache",
                 "image": "fedora/apache",
                 "ports": [{
                   "containerPort": 80,
                 }]
               }]
             }
           },
           "labels": {"name": "fedoraapache"},
          },
      }
    }
    
    # kubectl create -f replica.json 
    apacheController
    
    # kubectl get replicationController
    NAME                IMAGE(S)            SELECTOR            REPLICAS
    apacheController    fedora/apache       name=fedoraapache   3
    
    # kubectl get pod
    NAME                                   IMAGE(S)            HOST                LABELS              STATUS
    fedoraapache                           fedora/apache       192.168.0.6/        name=fedoraapache   Running
    cf6726ae-6fed-11e4-8a06-fa163e3873e1   fedora/apache       192.168.0.3/        name=fedoraapache   Running
    cf679152-6fed-11e4-8a06-fa163e3873e1   fedora/apache       192.168.0.3/        name=fedoraapache   Running
    

    As you can see, three containers are now running.
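
    Since the replica controller constantly reconciles the actual set of pods against the desired replica count, a quick experiment (a sketch, not part of the original walkthrough; it assumes this kubectl version supports kubectl delete pod <id>) is to delete one of the pods and watch it get replaced:

    # # delete one of the controller-managed pods (name taken from the kubectl get pod output above)
    # kubectl delete pod cf6726ae-6fed-11e4-8a06-fa163e3873e1
    # # after a short while the controller creates a replacement, so the count returns to 3
    # kubectl get pod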

    Services

    With the replication controller there are now multiple Pods running. However, each Pod is assigned a different IP, and these IPs may change as the system runs. So the question is: how do you access this service from the outside? That is exactly what a Service is for.

    A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them - sometimes called a micro-service. The goal of services is to provide a bridge for non-Kubernetes-native applications to access backends without the need to write code that is specific to Kubernetes. A service offers clients an IP and port pair which, when accessed, redirects to the appropriate backends. The set of pods targetted is determined by a label selector.

    As an example, consider an image-process backend which is running with 3 live replicas. Those replicas are fungible - frontends do not care which backend they use. While the actual pods that comprise the set may change, the frontend client(s) do not need to know that. The service abstraction enables this decoupling.

    Unlike pod IP addresses, which actually route to a fixed destination, service IPs are not actually answered by a single host. Instead, we use iptables (packet processing logic in Linux) to define “virtual” IP addresses which are transparently redirected as needed. We call the tuple of the service IP and the service port the portal. When clients connect to the portal, their traffic is automatically transported to an appropriate endpoint. The environment variables for services are actually populated in terms of the portal IP and port. We will be adding DNS support for services, too.

    # cat service.json 
    {
      "id": "fedoraapache",
      "kind": "Service",
      "apiVersion": "v1beta1",
      "selector": {
        "name": "fedoraapache",
      },
      "protocol": "TCP",
      "containerPort": 80,
      "port": 8987
    }
    # kubectl create -f service.json 
    fedoraapache
    # kubectl get service
    NAME                LABELS              SELECTOR                                  IP                  PORT
    kubernetes-ro                           component=apiserver,provider=kubernetes   10.254.0.2          80
    kubernetes                              component=apiserver,provider=kubernetes   10.254.0.1          443
    fedoraapache                            name=fedoraapache                         10.254.0.3          8987
    
    # # Switch to a minion
    # curl 10.254.0.3:8987
    Apache
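
    The portal IP 10.254.0.3 is not configured on any interface; as the quoted documentation explains, kube-proxy answers it through iptables rules on every minion. A quick way to peek at this machinery is sketched below (the exact chains and rule format depend on the kube-proxy version, so treat it as illustrative):

    # # on a minion: show the rules kube-proxy installed for the service portal
    # iptables-save | grep 10.254.0.3
    # # kube-proxy itself listens on a random local port (see the netstat output earlier)
    # # and proxies connections for 10.254.0.3:8987 to one of the backend fedoraapache pods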
    

    A public IP can also be configured for a service, provided that a cloud provider has been configured.

    The cloud providers currently supported include GCE, AWS, OpenStack, oVirt, Vagrant, and so on.

    For some parts of your application (e.g. your frontend) you want to expose a service on an external (publically visible) IP address. To achieve this, you can set the createExternalLoadBalancer flag on the service. This sets up a cloud provider specific load balancer (assuming that it is supported by your cloud provider) and also sets up IPTables rules on each host that map packets from the specified External IP address to the service proxy in the same manner as internal service IP addresses.

    Note: OpenStack support is implemented using Rackspace's open-source github.com/rackspace/gophercloud.
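
    A minimal sketch of such a service definition with the v1beta1 API used throughout this post is shown below; the external-service.json filename and the service id are illustrative, and the createExternalLoadBalancer flag mentioned above only has an effect when a supported cloud provider is configured:

    # cat external-service.json
    {
      "id": "fedoraapache-external",
      "kind": "Service",
      "apiVersion": "v1beta1",
      "selector": {
        "name": "fedoraapache"
      },
      "protocol": "TCP",
      "containerPort": 80,
      "port": 8987,
      "createExternalLoadBalancer": true
    }
    # kubectl create -f external-service.json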

    Health Check

    Currently, there are three types of application health checks that you can choose from:
    * HTTP Health Checks - The Kubelet will call a web hook. If it returns between 200 and 399, it is considered success, failure otherwise.
    * Container Exec - The Kubelet will execute a command inside your container. If it returns “ok” it will be considered a success.
    * TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy, if it can’t it is considered a failure.
    In all cases, if the Kubelet discovers a failure, the container is restarted.

    The container health checks are configured in the “LivenessProbe” section of your container config. There you can also specify an “initialDelaySeconds” that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.

    Here is an example config for a pod with an HTTP health check:

    kind: Pod
    apiVersion: v1beta1
    desiredState:
      manifest:
        version: v1beta1
        id: php
        containers:
          - name: nginx
            image: dockerfile/nginx
            ports:
              - containerPort: 80
            # defines the health checking
            livenessProbe:
              # turn on application health checking
              enabled: true
              type: http
              # length of time to wait for a pod to initialize
              # after pod startup, before applying health checking
              initialDelaySeconds: 30
              # an http probe
              httpGet:
                path: /_status/healthz
                port: 8080

    Copyright notice: this is an original blog post and may not be reproduced without permission.
