• k8s series --- StatefulSet (stateful application replica set) controller


    http://blog.itpub.net/28916011/viewspace-2215046/ 

    Applications can be divided into two kinds: stateful and stateless. 

        Stateless applications are about the group: any one member can be replaced by another. 

        Stateful applications are about the individual. 

        The nginx, myapp, and similar workloads we managed earlier with the Deployment controller are all stateless applications. 

        Applications such as MySQL, Redis, and ZooKeeper are stateful; some of them also distinguish master from slave and impose a startup order. 

        The StatefulSet controller can manage stateful applications, but doing so is quite involved: the operational procedures we normally run by hand have to be written as scripts and injected into the StatefulSet before it is usable. Although ready-made StatefulSet scripts for some applications can be found on the internet, it is still not recommended to casually migrate stateful applications such as Redis or MySQL onto Kubernetes.

        In Kubernetes, StatefulSet is mainly used to manage applications with the following characteristics: 

            a) each Pod has a stable, unique network identifier;

            b) stable, persistent storage; 

            c) ordered, graceful deployment and scaling; 

            d) ordered, graceful termination and deletion; 

            e) ordered rolling updates: slave nodes should be updated first, the master last. 

         A StatefulSet is made up of three components (all three appear together in stateful-demo.yaml below, where the StatefulSet's serviceName must match the headless Service's name):

            a) a headless service (a Service with no ClusterIP);

            b) the StatefulSet controller itself; 

            c) volumeClaimTemplate (a PVC template: each pod gets its own dedicated volume instead of sharing one). 

    [root@master ~]# kubectl explain sts   # sts is short for statefulset
    

      

    Before creating anything new, delete the leftover pods and svc created in earlier sections to avoid conflicts in a moment. You can also keep them, but some names in the YAML below conflict with them, so you would have to define your own.

    kubectl delete pods pod-vol-pvc
    kubectl delete pod pod-cm-3
    kubectl delete pods pod-secret-1
    kubectl delete deploy myapp-deploy
    kubectl delete deploy tomcat-deploy
    kubectl delete pvc mypvc
    kubectl delete pv --all
    kubectl delete svc myapp
    kubectl delete svc tomcat
    

      

    Then recreate the PVs: 

    [root@master volumes]# cat pv-demo.yaml 
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv001
      labels:
        name: pv001
    spec:
      nfs:
        path: /data/volumes/v1
        server: 172.16.100.64
      accessModes: ["ReadWriteMany","ReadWriteOnce"]
      capacity:
        storage: 5Gi
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv002
      labels:
        name: pv002
    spec:
      nfs:
        path: /data/volumes/v2
        server: 172.16.100.64
      accessModes: ["ReadWriteOnce"]
      capacity:
        storage: 5Gi
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv003
      labels:
        name: pv003
    spec:
      nfs:
        path: /data/volumes/v3
        server: 172.16.100.64
      accessModes: ["ReadWriteMany","ReadWriteOnce"]
      capacity:
        storage: 5Gi
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv004
      labels:
        name: pv004
    spec:
      nfs:
        path: /data/volumes/v4
        server: 172.16.100.64
      accessModes: ["ReadWriteMany","ReadWriteOnce"]
      capacity:
        storage: 5Gi
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv005
      labels:
        name: pv005
    spec:
      nfs:
        path: /data/volumes/v5
        server: 172.16.100.64
      accessModes: ["ReadWriteMany","ReadWriteOnce"]
      capacity:
        storage: 9Gi
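
    The PVs above assume the five NFS exports from the earlier volumes article already exist on 172.16.100.64. If they do not, something like the following on the NFS server would create them (a sketch; the hostname and the export network 172.16.0.0/16 are assumptions, adjust them to your environment):

    [root@nfs ~]# for i in 1 2 3 4 5; do mkdir -p /data/volumes/v$i; done
    [root@nfs ~]# cat >> /etc/exports <<EOF
    /data/volumes/v1 172.16.0.0/16(rw,no_root_squash)
    /data/volumes/v2 172.16.0.0/16(rw,no_root_squash)
    /data/volumes/v3 172.16.0.0/16(rw,no_root_squash)
    /data/volumes/v4 172.16.0.0/16(rw,no_root_squash)
    /data/volumes/v5 172.16.0.0/16(rw,no_root_squash)
    EOF
    [root@nfs ~]# exportfs -arv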
    

      

    [root@master stateful]# cat stateful-demo.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-svc
      labels:
        app: myapp-svc
    spec:
      ports:
      - port: 80
        name: web
      clusterIP: None
      selector:
        app: myapp-pod
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: myapp
    spec:
      serviceName: myapp-svc
      replicas: 2
      selector:
        matchLabels:
          app: myapp-pod
      template:
        metadata:
          labels:
            app: myapp-pod
        spec:
          containers:
          - name: myapp
            image: ikubernetes/myapp:v1
            ports:
            - containerPort: 80
              name: web
            volumeMounts:
            - name: myappdata
              mountPath: /usr/share/nginx/html
      volumeClaimTemplates: # PVC template: defines a dedicated volume for each pod; the PVC is created automatically in the pod's namespace.
      - metadata:
          name: myappdata
        spec:
          accessModes: ["ReadWriteOnce"]
          #storageClassName: "gluster-dynamic"
          resources:
            requests:
              storage: 2Gi # a 2Gi PVC for each pod
    

      

    [root@master stateful]# kubectl apply -f stateful-demo.yaml 
    service/myapp-svc unchanged
    statefulset.apps/myapp created
    

      

    [root@master stateful]# kubectl get svc
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
    myapp-svc    ClusterIP   None            <none>        80/TCP              12m
    

      

        You can see that myapp-svc is a headless service (its CLUSTER-IP is None).
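
        A quick way to verify what headless means: resolving the Service name itself from inside any pod returns the A records of the individual pods rather than a single virtual IP (a sketch; the addresses will differ in your cluster):

    / # nslookup myapp-svc.default.svc.cluster.local
    # expect one Address line per ready pod (e.g. 10.244.1.110 and 10.244.2.97),
    # not a single stable ClusterIP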

    [root@master stateful]# kubectl get sts
    NAME      DESIRED   CURRENT   AGE
    myapp     2         2         6m
    

      

    [root@master stateful]# kubectl get pvc
    NAME                STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    myappdata-myapp-0   Bound     pv002     2Gi        RWO                           3s
    myappdata-myapp-1   Bound     pv003     1Gi        RWO,RWX                       1s
    

      

    [root@master stateful]# kubectl get pv
    NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                       STORAGECLASS   REASON    AGE
    pv001     1Gi        RWO,RWX        Retain           Available                                                        1d
    pv002     2Gi        RWO            Retain           Bound       default/myappdata-myapp-0                            1d
    pv003     1Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-1                            1d
    pv004     1Gi        RWO,RWX        Retain           Bound       default/mypvc                                        1d
    pv005     1Gi        RWO,RWX        Retain           Available
    

      

    [root@master stateful]# kubectl get pods
    NAME                             READY     STATUS             RESTARTS   AGE
    myapp-0                          1/1       Running            0          4m
    myapp-1                          1/1       Running            0          4m
    

      

    [root@master stateful]# kubectl delete -f stateful-demo.yaml 
    service "myapp-svc" deleted
    statefulset.apps "myapp" deleted
    

      

        Deleting the manifest removes the pods and the Service, but the PVCs are kept, so the state can still be recovered. 
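
        Recovery is simple: re-applying the same manifest recreates the pods, and because volumeClaimTemplates generates deterministic claim names (myappdata-myapp-0, myappdata-myapp-1, ...), the new pods re-attach to the existing PVCs and find their old data. A quick check (a sketch):

    [root@master stateful]# kubectl apply -f stateful-demo.yaml
    [root@master stateful]# kubectl get pvc    # the old PVCs are still Bound and get reused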

    [root@master stateful]# kubectl exec -it myapp-0 -- /bin/sh
    / # nslookup myapp-0.myapp-svc.default.svc.cluster.local
    nslookup: can't resolve '(null)': Name does not resolve
    Name:      myapp-0.myapp-svc.default.svc.cluster.local
    Address 1: 10.244.1.110 myapp-0.myapp-svc.default.svc.cluster.local
    / # 
    / # 
    / # nslookup myapp-1.myapp-svc.default.svc.cluster.local
    nslookup: can't resolve '(null)': Name does not resolve
    Name:      myapp-1.myapp-svc.default.svc.cluster.local
    Address 1: 10.244.2.97 myapp-1.myapp-svc.default.svc.cluster.local
    

      

     myapp-0.myapp-svc.default.svc.cluster.local

        The name format is: pod_name.service_name.namespace.svc.cluster.local   
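
        Because these names stay stable across pod restarts and rescheduling, a client can address one specific replica directly, for example fetching the page served by the first pod (illustrative):

    / # wget -q -O - myapp-0.myapp-svc.default.svc.cluster.local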

        Now scale the myapp pods out to 5: 

    [root@master stateful]# kubectl scale sts myapp --replicas=5
    statefulset.apps/myapp scaled
    

      

    [root@master stateful]# kubectl get pods
    NAME                             READY     STATUS             RESTARTS   AGE
    client                           0/1       Error              0          17d
    myapp-0                          1/1       Running            0          37m
    myapp-1                          1/1       Running            0          37m
    myapp-2                          1/1       Running            0          46s
    myapp-3                          1/1       Running            0          43s
    myapp-4                          0/1       Pending            0          41s
    

      

    [root@master stateful]# kubectl get pvc
    NAME                STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    myappdata-myapp-0   Bound     pv002     2Gi        RWO                           52m
    myappdata-myapp-1   Bound     pv003     1Gi        RWO,RWX                       52m
    myappdata-myapp-2   Bound     pv005     1Gi        RWO,RWX                       2m
    myappdata-myapp-3   Bound     pv001     1Gi        RWO,RWX                       2m
    myappdata-myapp-4   Pending                                                      2m
    

      

        Note that myapp-4 stays Pending because its claim, myappdata-myapp-4, cannot bind: the other four PVs are already bound to the first four claims, and pv004 is apparently still held by the old mypvc claim (see the kubectl get pv output above), so no free PV satisfies the template.

        Besides kubectl scale, you can also scale out and in by patching the StatefulSet: 

    [root@master stateful]# kubectl patch sts myapp -p '{"spec":{"replicas":2}}'
    statefulset.apps/myapp patched
    

      

        Next, let's look at rolling updates.

    [root@master stateful]# kubectl explain sts.spec.updateStrategy.rollingUpdate
    

      

        During a rolling update, only pods whose ordinal is greater than or equal to the partition value are updated. Suppose there are 5 pods (pod0 through pod4): with partition set to 5, no pod is updated; with partition set to 4, only pod4 is updated; with partition set to 3, pod3 and pod4 are updated while the rest stay on the old template. The default partition of 0 updates every pod. 

    [root@master stateful]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":4}}}}'
    statefulset.apps/myapp patched
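
        The same setting can be made declaratively in the StatefulSet manifest instead of patching; the relevant spec fragment would look like this (a sketch):

    spec:
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          partition: 4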
    

      

    Note: the partition is displayed differently across versions. The video (version 1.11 or so) shows 4, while on 1.13, no matter how it is set, it is shown not as 4 but as a long string of digits. The update nevertheless follows the strategy described above.

    [root@master stateful]# kubectl describe sts myapp
    Update Strategy:    RollingUpdate
    Partition:        4
    

      

        Now upgrade myapp to v2: 

    [root@master stateful]# kubectl set image sts/myapp myapp=ikubernetes/myapp:v2
    statefulset.apps/myapp image updated
    [root@master ~]# kubectl get sts -o wide
    NAME      DESIRED   CURRENT   AGE       CONTAINERS   IMAGES
    myapp     2         2         1h        myapp        ikubernetes/myapp:v2
    [root@master ~]# kubectl get pods myapp-4 -o yaml
     containerStatuses:
      - containerID: docker://898714f2e5bf4f642e2a908e7da67eebf6d3074c89bbd0d798d191a2061a3115
        image: ikubernetes/myapp:v2
    

      

        You can see that pod myapp-4 is now running template version v2. 
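
        To roll v2 out to the remaining pods, lower the partition step by step for a canary-style rollout, or set it straight back to 0 to update everything, using the same patch pattern as above:

    [root@master stateful]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'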
