• Kubernetes: StatefulSet Resource Upgrades (Rolling Updates and Staged Updates)


    StatefulSet Resource Upgrades

      Since Kubernetes 1.7, the StatefulSet resource has supported an automated update mechanism. The update strategy is defined by the spec.updateStrategy field, and the default is RollingUpdate, i.e. rolling update (the other option is OnDelete).

    1. Rolling Update Operations

      A rolling update replaces the StatefulSet's Pods one by one in reverse ordinal order, starting from the Pod with the highest ordinal: the controller terminates a Pod, updates it, and waits for it to become Ready before moving on to the next Pod, the one whose ordinal is one lower than the current one. For primary/replica clustered applications, this ordering ensures that the Pod acting as the primary node is updated last, preserving compatibility.

      RollingUpdate is the StatefulSet's default update strategy, which can be confirmed from the output of the "kubectl get statefulset NAME -o yaml" command:

    [root@k8s-master01-test-2-26 ~]# kubectl get statefulset alertmanager-main -n kubesphere-monitoring-system
    NAME                READY   AGE
    alertmanager-main   1/1     66m
    [root@k8s-master01-test-2-26 ~]# kubectl get statefulset alertmanager-main -n kubesphere-monitoring-system -o yaml
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      annotations:
        prometheus-operator-input-hash: "10566271792385409484"
      creationTimestamp: "2022-06-29T05:31:54Z"
      generation: 1
      labels:
        app.kubernetes.io/component: alert-router
        app.kubernetes.io/instance: main
        app.kubernetes.io/name: alertmanager
        app.kubernetes.io/part-of: kube-prometheus
        app.kubernetes.io/version: 0.23.0
      name: alertmanager-main
      namespace: kubesphere-monitoring-system
      ownerReferences:
      - apiVersion: monitoring.coreos.com/v1
        blockOwnerDeletion: true
        controller: true
        kind: Alertmanager
        name: main
        uid: c054f193-4d87-49b8-b8e7-9275c753460c
      resourceVersion: "6347"
      uid: dbd3d926-fefd-4b11-94a6-079b08b7a293
    spec:
      podManagementPolicy: Parallel
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          alertmanager: main
          app.kubernetes.io/instance: main
          app.kubernetes.io/managed-by: prometheus-operator
          app.kubernetes.io/name: alertmanager
      serviceName: alertmanager-operated
      template:
        metadata:
          annotations:
            kubectl.kubernetes.io/default-container: alertmanager
          creationTimestamp: null
          labels:
            alertmanager: main
            app.kubernetes.io/component: alert-router
            app.kubernetes.io/instance: main
            app.kubernetes.io/managed-by: prometheus-operator
            app.kubernetes.io/name: alertmanager
            app.kubernetes.io/part-of: kube-prometheus
            app.kubernetes.io/version: 0.23.0
        spec:
          affinity:
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - podAffinityTerm:
                  labelSelector:
                    matchExpressions:
                    - key: alertmanager
                      operator: In
                      values:
                      - main
                  namespaces:
                  - kubesphere-monitoring-system
                  topologyKey: kubernetes.io/hostname
                weight: 100
          containers:
          - args:
            - --config.file=/etc/alertmanager/config/alertmanager.yaml
            - --storage.path=/alertmanager
            - --data.retention=120h
            - --cluster.listen-address=
            - --web.listen-address=:9093
            - --web.route-prefix=/
            - --cluster.peer=alertmanager-main-0.alertmanager-operated:9094
            - --cluster.reconnect-timeout=5m
            env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
            image: registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
            imagePullPolicy: IfNotPresent
            livenessProbe:
              failureThreshold: 10
              httpGet:
                path: /-/healthy
                port: web
                scheme: HTTP
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 3
            name: alertmanager
            ports:
            - containerPort: 9093
              name: web
              protocol: TCP
            - containerPort: 9094
              name: mesh-tcp
              protocol: TCP
            - containerPort: 9094
              name: mesh-udp
              protocol: UDP
            readinessProbe:
              failureThreshold: 10
              httpGet:
                path: /-/ready
                port: web
                scheme: HTTP
              initialDelaySeconds: 3
              periodSeconds: 5
              successThreshold: 1
              timeoutSeconds: 3
            resources:
              limits:
                cpu: 200m
                memory: 200Mi
              requests:
                cpu: 20m
                memory: 30Mi
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
              readOnlyRootFilesystem: true
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: FallbackToLogsOnError
            volumeMounts:
            - mountPath: /etc/alertmanager/config
              name: config-volume
            - mountPath: /etc/alertmanager/certs
              name: tls-assets
              readOnly: true
            - mountPath: /alertmanager
              name: alertmanager-main-db
          - args:
            - --listen-address=:8080
            - --reload-url=http://localhost:9093/-/reload
            - --watched-dir=/etc/alertmanager/config
            command:
            - /bin/prometheus-config-reloader
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: SHARD
              value: "-1"
            image: registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
            imagePullPolicy: IfNotPresent
            name: config-reloader
            ports:
            - containerPort: 8080
              name: reloader-web
              protocol: TCP
            resources:
              limits:
                cpu: 100m
                memory: 50Mi
              requests:
                cpu: 100m
                memory: 50Mi
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
              readOnlyRootFilesystem: true
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: FallbackToLogsOnError
            volumeMounts:
            - mountPath: /etc/alertmanager/config
              name: config-volume
              readOnly: true
          dnsPolicy: ClusterFirst
          nodeSelector:
            kubernetes.io/os: linux
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext:
            fsGroup: 2000
            runAsNonRoot: true
            runAsUser: 1000
          serviceAccount: alertmanager-main
          serviceAccountName: alertmanager-main
          terminationGracePeriodSeconds: 120
          volumes:
          - name: config-volume
            secret:
              defaultMode: 420
              secretName: alertmanager-main-generated
          - name: tls-assets
            projected:
              defaultMode: 420
              sources:
              - secret:
                  name: alertmanager-main-tls-assets-0
          - emptyDir: {}
            name: alertmanager-main-db
      updateStrategy:
        type: RollingUpdate
    status:
      availableReplicas: 1
      collisionCount: 0
      currentReplicas: 1
      currentRevision: alertmanager-main-cd5bc8fdc
      observedGeneration: 1
      readyReplicas: 1
      replicas: 1
      updateRevision: alertmanager-main-cd5bc8fdc
      updatedReplicas: 1
    [root@k8s-master01-test-2-26 ~]# 
    

      The "kubectl rollout status" command tracks the status of a StatefulSet while a rolling update is in progress.
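      As a sketch of triggering and tracking a rolling update (the StatefulSet name "myapp", its container name, and the image tags below are hypothetical, not taken from the listing above):

```shell
# Trigger a rolling update by changing the container image
# (statefulset/myapp and ikubernetes/myapp:v2 are assumed names).
kubectl set image statefulset/myapp myapp=ikubernetes/myapp:v2

# Track progress; Pods are replaced in reverse ordinal order
# (myapp-2 first, then myapp-1, finally myapp-0).
kubectl rollout status statefulset/myapp

# Confirm each Pod is now running the new image
kubectl get pods -l app=myapp \
  -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image
```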

    2. Staged Update Operations

      When a user needs to define an update but does not want it to take effect immediately, the update can be "staged" and then triggered manually once the conditions are met.

      The StatefulSet partitioned-update mechanism provides this capability. Before defining the update, set the .spec.updateStrategy.rollingUpdate.partition field to the number of Pod replicas, i.e. one greater than the highest Pod ordinal. This places every Pod outside the directly updatable partition, so any update defined afterwards will not actually execute until the user lowers the partition number back into the range of existing Pod ordinals.

    [root@k8s-master01-test-2-26 ~]# kubectl explain statefulset.spec.updateStrategy.rollingUpdate.partition
    KIND:     StatefulSet
    VERSION:  apps/v1
    
    FIELD:    partition <integer>
    
    DESCRIPTION:
         Partition indicates the ordinal at which the StatefulSet should be
         partitioned for updates. During a rolling update, all pods from ordinal
         Replicas-1 to Partition are updated. All pods from ordinal Partition-1 to 0
         remain untouched. This is helpful in being able to do a canary based
         deployment. The default value is 0.
    [root@k8s-master01-test-2-26 ~]# 
    

      The following demonstrates staging a rolling update. First, set the StatefulSet's rolling-update partition value to 3:

    kubectl patch statefulset myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":3}}}}'
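      With the partition set above the highest ordinal, an update can now be defined without taking effect. A sketch, continuing with the same hypothetical "myapp" StatefulSet (3 replicas, ordinals 0 to 2):

```shell
# Define the update; because partition=3 covers no existing
# ordinal (0..2), no Pod is actually replaced yet.
kubectl set image statefulset/myapp myapp=ikubernetes/myapp:v2

# updateRevision changes, but currentRevision and the running
# Pods stay on the old revision.
kubectl get statefulset myapp \
  -o jsonpath='{.status.currentRevision}{"\n"}{.status.updateRevision}{"\n"}'
```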
    

      While staged, the update has no effect on any Pod: even if a Pod is deleted, it will still be recreated from the old image version.
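      To release the staged update, lower the partition back into the ordinal range. Lowering it to an intermediate value first yields a canary rollout; setting it to 0 updates everything (a sketch, continuing the hypothetical "myapp" example):

```shell
# Canary: only Pods with ordinal >= 2 (here just myapp-2) are updated
kubectl patch statefulset myapp \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'

# After validating the canary, release the update to all remaining Pods
kubectl patch statefulset myapp \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl rollout status statefulset/myapp
```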

  • Original article: https://www.cnblogs.com/zuoyang/p/16423483.html