• Kubernetes StatefulSet controller


    operator:

    StatefulSet: a stateful replica set

    Characteristics:

    1. Stable, unique network identifiers

    2. Stable, persistent storage

    3. Ordered, graceful deployment and scaling

    4. Ordered, graceful deletion and termination

    5. Ordered rolling updates

    Three components: a headless service, the StatefulSet itself, and a volumeClaimTemplate (PVC template)
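The ordering guarantees above can be sketched in plain Python (illustrative only, not Kubernetes code; pod names do follow the real `<statefulset-name>-<ordinal>` convention):

```python
# Illustration of StatefulSet ordering: pods are created one at a time in
# ascending ordinal order and terminated in descending ordinal order.

def creation_order(sts_name: str, replicas: int) -> list[str]:
    """Pods come up ordinal 0 first; each must be Running before the next starts."""
    return [f"{sts_name}-{i}" for i in range(replicas)]

def deletion_order(sts_name: str, replicas: int) -> list[str]:
    """Pods are removed highest ordinal first."""
    return list(reversed(creation_order(sts_name, replicas)))

print(creation_order("myapp", 3))  # ['myapp-0', 'myapp-1', 'myapp-2']
print(deletion_order("myapp", 3))  # ['myapp-2', 'myapp-1', 'myapp-0']
```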

    First, prepare the PVs:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv001
      labels:
        name: pv001
        policy: fast
    spec:
      nfs:
        path: /data/volumes/v1
        server: node2
      accessModes: ["ReadWriteMany","ReadWriteOnce"]
      capacity:
        storage: 5Gi
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv002
      labels:
        name: pv002
        policy: fast
    spec:
      nfs:
        path: /data/volumes/v2
        server: node2
      accessModes: ["ReadWriteOnce"]
      capacity:
        storage: 5Gi
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv003
      labels:
        name: pv003
        policy: fast
    spec:
      nfs:
        path: /data/volumes/v3
        server: node2
      accessModes: ["ReadWriteMany","ReadWriteOnce"]
      capacity:
        storage: 5Gi
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv004
      labels:
        name: pv004
        policy: fast
    spec:
      nfs:
        path: /data/volumes/v4
        server: node2
      accessModes: ["ReadWriteMany","ReadWriteOnce"]
      capacity:
        storage: 10Gi
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv005
      labels:
        name: pv005
        policy: fast
    spec:
      nfs:
        path: /data/volumes/v5
        server: node2
      accessModes: ["ReadWriteMany","ReadWriteOnce"]
      capacity:
        storage: 10Gi

     

     

    kubectl apply -f pv-demo.yaml
    kubectl get pv
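A claim binds to a PV that offers the requested access mode and at least the requested capacity. A simplified sketch of that matching rule (illustrative Python, not the real binder; the PV data mirrors the manifests above):

```python
# Simplified illustration of PVC-to-PV matching: the PV must offer the
# requested access mode and at least the requested capacity.

def matching_pvs(pvs: dict, requested_gi: int, requested_mode: str) -> list[str]:
    return [name for name, (cap_gi, modes) in pvs.items()
            if cap_gi >= requested_gi and requested_mode in modes]

# Capacity (Gi) and access modes of the five PVs defined above.
pvs = {
    "pv001": (5,  {"ReadWriteMany", "ReadWriteOnce"}),
    "pv002": (5,  {"ReadWriteOnce"}),
    "pv003": (5,  {"ReadWriteMany", "ReadWriteOnce"}),
    "pv004": (10, {"ReadWriteMany", "ReadWriteOnce"}),
    "pv005": (10, {"ReadWriteMany", "ReadWriteOnce"}),
}
print(matching_pvs(pvs, 5, "ReadWriteOnce"))
# all five PVs can satisfy a 5Gi ReadWriteOnce claim
```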

     

     

    Example:

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-svc            # service name
      labels:
        app: myapp
    spec:
      ports:
      - port: 80                 # service port
        name: web                # service port name
      clusterIP: None            # StatefulSet requires a headless service
      selector:                  # labels of the pods this service targets
        app: myapp-pod
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: myapp                # StatefulSet name; pods are named after it
    spec:
      serviceName: myapp-svc     # governing service; must be headless
      replicas: 2
      selector:                  # which pods this controller manages, by label
        matchLabels:
          app: myapp-pod
      template:                  # pod template
        metadata:
          labels:                # pod labels
            app: myapp-pod
        spec:
          containers:
          - name: myapp          # container name within the pod
            image: ikubernetes/myapp:v1
            ports:
            - containerPort: 80
              name: web
            volumeMounts:
            - name: myappdata    # mount the myappdata volume
              mountPath: /usr/share/nginx/html   # mount path inside the container
      volumeClaimTemplates:      # PVC template; a PVC is created automatically for each pod
      - metadata:
          name: myappdata        # PVC template name
        spec:
          accessModes: ["ReadWriteOnce"]   # single-node read-write
          resources:
            requests:
              storage: 5Gi       # request 5Gi of storage

     

     

    Create:
    kubectl explain sts
    kubectl apply -f stateful-demo.yaml

     

     

    Verify:
    kubectl get sts
    kubectl get pvc
    kubectl get svc
    kubectl get pv
    kubectl get pods

     

    Delete the StatefulSet (pods are removed in reverse ordinal order):
    kubectl delete -f stateful-demo.yaml
    After deletion the PVCs remain, and each stays reserved for its fixed pod.
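The reason a PVC stays with its pod is the deterministic naming: each claim is named `<volumeClaimTemplate-name>-<pod-name>`, so a recreated pod with the same ordinal finds the same claim again. A small illustrative sketch:

```python
# Illustration: PVC names produced by a volumeClaimTemplate.
# Claim name = <template-name>-<statefulset-name>-<ordinal>.

def pvc_names(template: str, sts_name: str, replicas: int) -> list[str]:
    return [f"{template}-{sts_name}-{i}" for i in range(replicas)]

print(pvc_names("myappdata", "myapp", 2))
# ['myappdata-myapp-0', 'myappdata-myapp-1']
```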

     

    StatefulSet supports rolling updates and scaling.
    Updates proceed in reverse ordinal order.

     

    DNS resolution:
    kubectl exec -it myapp-0 -- /bin/sh
    nslookup myapp-3.myapp-svc.default.svc.cluster.local
    The name is composed of the pod name, the service name, the namespace, and the cluster domain svc.cluster.local:
    pod_name.service_name.namespace_name.svc.cluster.local
    nslookup myapp-3.myapp-svc
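The naming pattern above can be expressed as a small helper (illustrative Python; the default namespace and cluster domain are assumptions):

```python
# Illustration: the stable DNS name each StatefulSet pod receives through
# its headless service.

def pod_fqdn(pod: str, service: str, namespace: str = "default",
             cluster_domain: str = "cluster.local") -> str:
    return f"{pod}.{service}.{namespace}.svc.{cluster_domain}"

print(pod_fqdn("myapp-3", "myapp-svc"))
# myapp-3.myapp-svc.default.svc.cluster.local
```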

     

     

    Scaling up and down:
    kubectl scale sts myapp --replicas=3
    kubectl patch sts myapp -p '{"spec":{"replicas":2}}'

     

     

    Update strategy:
    kubectl explain sts.spec.updateStrategy
    kubectl explain sts.spec.updateStrategy.rollingUpdate

     

     

    Partitioned updates:
    kubectl explain sts.spec.updateStrategy.rollingUpdate.partition
    myapp-0
    myapp-1
    myapp-2
    myapp-3
    myapp-4

    partition: N
    Only pods whose ordinal is >= N are updated. With partition: 3, only myapp-3 and myapp-4 are updated; this is a canary release.
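The partition rule can be sketched as (illustrative Python):

```python
# Illustration: with updateStrategy.rollingUpdate.partition = N, only pods
# whose ordinal is >= N are rolled to the new version, highest ordinal first.

def pods_to_update(sts_name: str, replicas: int, partition: int) -> list[str]:
    return [f"{sts_name}-{i}" for i in reversed(range(replicas)) if i >= partition]

print(pods_to_update("myapp", 5, 3))  # ['myapp-4', 'myapp-3']
print(pods_to_update("myapp", 5, 0))  # all five pods: a full roll-out
```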

    Verification
    Method 1:
    kubectl patch sts myapp -p '{"spec":{"replicas":5}}'
    kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":4}}}}'   # only pods with ordinal >= 4 will be updated
    kubectl describe sts myapp   # check the update strategy
    kubectl set image sts/myapp myapp=ikubernetes/myapp:v2   # changing the image triggers the update
    kubectl get sts -o wide

     

    Method 2:
    vim stateful-demo.yaml
    kind: StatefulSet
       ...
    spec:
      updateStrategy:
        rollingUpdate:
          partition: 3

    kubectl apply -f stateful-demo.yaml

     

    If the new version works, roll it out to all pods by lowering the partition to 0:
    vim stateful-demo.yaml
    kind: StatefulSet
       ...
    spec:
      updateStrategy:
        rollingUpdate:
          partition: 0

    kubectl apply -f stateful-demo.yaml

• Original article: https://www.cnblogs.com/leiwenbin627/p/11317274.html