Kubernetes -- Horizontal Pod Autoscaler


    Introduction

    In Kubernetes, we use Pods to serve traffic. There are two situations that need our attention:

    • A Pod dies for unknown reasons and the service becomes unavailable

    • Under heavy load, the existing Pods cannot keep up with the service

    Monitoring the Pods by hand and adjusting the replica count manually would be an enormous amount of work, but Kubernetes already has a mechanism to handle this.

    Today we will walk through setting up elastic scaling on k8s 1.6.

    k8s is the official abbreviation for Kubernetes.
    HPA stands for Horizontal Pod Autoscaler.

    How HPA works

    Kubernetes provides an HPA (Horizontal Pod Autoscaler) resource that scales Pods automatically based on CPU utilization. The HPA periodically checks Pod CPU usage (Heapster must be installed first) at an interval defined by the kube-controller-manager startup flag --horizontal-pod-autoscaler-sync-period on the master node (30 seconds by default). To change --horizontal-pod-autoscaler-sync-period, edit /etc/default/kube-controller-manager on the master node.
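
    For reference, a minimal sketch of adjusting that flag (the /etc/default/kube-controller-manager path comes from the text above; the environment-variable name is an assumption and can differ between installations):

    # /etc/default/kube-controller-manager -- variable name may vary with the install method
    KUBE_CONTROLLER_MANAGER_ARGS="--horizontal-pod-autoscaler-sync-period=30s"

    # Restart the controller manager so the new period takes effect
    systemctl restart kube-controller-manager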

    Installing Heapster

    Starting with Kubernetes 1.8, CPU, memory and other resource metrics are exposed through the Metrics API. Users can read these metrics directly (for example by running kubectl top), and the HPA uses them for dynamic scaling. Before 1.8, however, we use Heapster to collect node and Pod data.
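
    Where a metrics source is available (the Metrics API on 1.8+, or Heapster as set up below), the metrics can be read straight from kubectl; a quick sketch:

    # Requires a running metrics source; these commands return an error otherwise
    kubectl top node
    kubectl top pod -n kube-system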

    Importing the images

    In practice we usually create a /data directory and keep all deployment manifests under it, so create a kube-system directory there on the k8s master:

    [root@master data]# mkdir kube-system
    
    Upload the image archives and load them:
    # Load heapster
    [root@master kube-system]# docker load < heapster_3.tar 
    38ac8d0f5bb3: Loading layer [==================================================>]  1.312MB/1.312MB
    388f58c4d5b0: Loading layer [==================================================>]  99.87MB/99.87MB
    c6772246bc46: Loading layer [==================================================>]  281.1kB/281.1kB
    Loaded image: registry.cn-hangzhou.aliyuncs.com/lczean/heapster-amd64-v1.3.0-beta.1:v1.3.0-beta.1
    # Load the influxdb database
    [root@master kube-system]# docker load < influxdb13.tar 
    7da815924651: Loading layer [==================================================>]  10.48MB/10.48MB
    2d447b9e914f: Loading layer [==================================================>]   5.12kB/5.12kB
    Loaded image: registry.cn-hangzhou.aliyuncs.com/golden/heapster-influxdb-amd64:latest
    
    

    Check the imported images

    [root@master kube-system]# docker images |grep heapster
    registry.cn-hangzhou.aliyuncs.com/lczean/heapster-amd64-v1.3.0-beta.1   v1.3.0-beta.1       6393b81e2220        17 months ago       101MB
    registry.cn-hangzhou.aliyuncs.com/golden/heapster-influxdb-amd64        latest              d3fccbedd180        22 months ago       11.6MB
    

    Retag the images so they can be pushed to our private registry

    [root@master kube-system]# docker tag registry.cn-hangzhou.aliyuncs.com/lczean/heapster-amd64-v1.3.0-beta.1:v1.3.0-beta.1  registry.k8s.osc:5000/heapster:v1.3.0
    [root@master kube-system]# docker tag registry.cn-hangzhou.aliyuncs.com/golden/heapster-influxdb-amd64 registry.k8s.osc:5000/heapster-influxdb
    # Check the retagged images
    [root@master kube-system]# docker images |grep heapster
    registry.cn-hangzhou.aliyuncs.com/lczean/heapster-amd64-v1.3.0-beta.1   v1.3.0-beta.1       6393b81e2220        17 months ago       101MB
    registry.k8s.osc:5000/heapster                                          v1.3.0              6393b81e2220        17 months ago       101MB
    registry.cn-hangzhou.aliyuncs.com/golden/heapster-influxdb-amd64        latest              d3fccbedd180        22 months ago       11.6MB
    registry.k8s.osc:5000/heapster-influxdb                                 latest              d3fccbedd180        22 months ago       11.6MB
    
    Push to the private registry
    [root@master kube-system]# docker push registry.k8s.osc:5000/heapster:v1.3.0
    The push refers to repository [registry.k8s.osc:5000/heapster]
    c6772246bc46: Pushed 
    388f58c4d5b0: Pushed 
    38ac8d0f5bb3: Pushed 
    v1.3.0: digest: sha256:e23b30d2e131e042eec9b5fdc30af905b63e454d140dc335246e74a4e8b4c857 size: 949
    [root@master kube-system]# docker push registry.k8s.osc:5000/heapster-influxdb 
    The push refers to repository [registry.k8s.osc:5000/heapster-influxdb]
    2d447b9e914f: Pushed 
    7da815924651: Pushed 
    38ac8d0f5bb3: Mounted from heapster 
    latest: digest: sha256:d2ecd285eb6585d56e8853da7b9fd8f4a57de4a3006f6720173a3f3942c0e7c9 size: 945
    

    About the InfluxDB time-series database

    InfluxDB is a time-series database; Heapster uses it as the sink where the collected metrics are stored.

    Creating the Deployments

    
    [root@master kube-system]# vim influxdb-deployment.yaml
    [root@master kube-system]# vim influxdb-service.yaml
    [root@master kube-system]# vim heapster-deployment.yaml
    [root@master kube-system]# vim heapster-service.yaml   
    

    Let's look at each YAML file in turn:
    influxdb-deployment.yaml
    Change the image:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: monitoring-influxdb
      namespace: kube-system
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            task: monitoring
            k8s-app: influxdb
        spec:
          volumes:
          - name: influxdb-storage
            emptyDir: {}
          containers:
          - name: influxdb
            image: registry.k8s.osc:5000/heapster-influxdb
            volumeMounts:
            - mountPath: /data
              name: influxdb-storage
    
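    Note that emptyDir only lives as long as the Pod, so the collected metrics are lost whenever InfluxDB is rescheduled. If the data should persist, the volume could be backed by a PersistentVolumeClaim instead, roughly like this (the claim name influxdb-pvc is hypothetical and would need to be created separately):

          volumes:
          - name: influxdb-storage
            persistentVolumeClaim:
              claimName: influxdb-pvc   # hypothetical claim, create it beforehand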

    influxdb-service.yaml

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        task: monitoring
        kubernetes.io/cluster-service: 'true'
        kubernetes.io/name: monitoring-influxdb
      name: monitoring-influxdb
      namespace: kube-system
    spec:
      ports:
      - name: http
        port: 8083
        targetPort: 8083
      - name: api
        port: 8086
        targetPort: 8086
      selector:
        k8s-app: influxdb
    

    Create the Deployment and Service

    [root@master kube-system]# kubectl create -f influxdb-deployment.yaml 
    [root@master kube-system]# kubectl create -f influxdb-service.yaml 
    

    After creating these two, look up the IP of the Pod that InfluxDB is running on

    [root@master kube-system]# kubectl get pods  -n kube-system -o wide
    NAME                                   READY     STATUS    RESTARTS   AGE       IP            NODE
    monitoring-influxdb-3696415694-q9tds   1/1       Running   0          16m       172.99.39.6   172.16.187.158
    

    To verify the installation, access the following URL from a node where flanneld is installed; if there is no error, the installation succeeded

    [root@node0 ~]# curl http://172.99.39.6:8086/ping
    
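    To also see the response status, add -i; a healthy InfluxDB normally answers /ping with HTTP 204 No Content:

    [root@node0 ~]# curl -i http://172.99.39.6:8086/ping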

    Create heapster-deployment.yaml
    Change the image, --source and --sink:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: heapster
      namespace: kube-system
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            task: monitoring
            k8s-app: heapster
            version: v6
        spec:
          containers:
          - name: heapster
            image: registry.k8s.osc:5000/heapster:v1.3.0
            imagePullPolicy: Always
            command:
            - /heapster
            - --source=kubernetes:http://172.16.187.162:8080
            - --sink=influxdb:http://172.99.39.6:8086
    
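    Hard-coding the InfluxDB Pod IP in --sink works, but that address changes whenever the Pod is rescheduled. Assuming cluster DNS is running, a sketch that points the sink at the monitoring-influxdb Service created above instead:

            command:
            - /heapster
            - --source=kubernetes:http://172.16.187.162:8080
            - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086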

    Create heapster-service.yaml

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        task: monitoring  
        kubernetes.io/cluster-service: 'true'
        kubernetes.io/name: Heapster
      name: heapster
      namespace: kube-system
    spec:
      ports:
      - port: 80
        targetPort: 8082
      selector:
        k8s-app: heapster
    

    Create the heapster Deployment and Service

    [root@master kube-system]# kubectl create -f heapster-deployment.yaml 
    deployment "heapster" created
    [root@master kube-system]# kubectl create -f heapster-service.yaml 
    service "heapster" created
    

    Once everything is installed, check the logs to confirm the Pods started normally

    [root@master kube-system]# kubectl get pods  -n kube-system -o wide   
    NAME                                   READY     STATUS    RESTARTS   AGE       IP             NODE
    heapster-1258036176-sjg7s              1/1       Running   0          1m        172.99.93.13   172.16.187.160
    monitoring-influxdb-3696415694-q9tds   1/1       Running   0          26m       172.99.39.6    172.16.187.158
    
    
    [root@master kube-system]# kubectl logs -f monitoring-influxdb-3696415694-q9tds  -n kube-system                         
    
     8888888           .d888 888                   8888888b.  888888b.
       888            d88P"  888                   888  "Y88b 888  "88b
       888            888    888                   888    888 888  .88P
       888   88888b.  888888 888 888  888 888  888 888    888 8888888K.
       888   888 "88b 888    888 888  888  Y8bd8P' 888    888 888  "Y88b
       888   888  888 888    888 888  888   X88K   888    888 888    888
       888   888  888 888    888 Y88b 888 .d8""8b. 888  .d88P 888   d88P
     8888888 888  888 888    888  "Y88888 888  888 8888888P"  8888888P"
    
    [run] 2018/12/07 05:27:33 InfluxDB starting, version unknown, branch unknown, commit unknown
    [run] 2018/12/07 05:27:33 Go version go1.7.4, GOMAXPROCS set to 16
    [run] 2018/12/07 05:27:33 Using configuration at: /etc/config.toml
    [store] 2018/12/07 05:27:33 Using data dir: /data/data
    [subscriber] 2018/12/07 05:27:33 opened service
    [monitor] 2018/12/07 05:27:33 Starting monitor system
    [monitor] 2018/12/07 05:27:33 'build' registered for diagnostics monitoring
    [monitor] 2018/12/07 05:27:33 'runtime' registered for diagnostics monitoring
    [monitor] 2018/12/07 05:27:33 'network' registered for diagnostics monitoring
    [monitor] 2018/12/07 05:27:33 'system' registered for diagnostics monitoring
    [shard-precreation] 2018/12/07 05:27:33 Starting precreation service with check interval of 10m0s, advance period of 30m0s
    [snapshot] 2018/12/07 05:27:33 Starting snapshot service
    [continuous_querier] 2018/12/07 05:27:33 Starting continuous query service
    [httpd] 2018/12/07 05:27:33 Starting HTTP service
    [httpd] 2018/12/07 05:27:33 Authentication enabled: false
    
    ## heapster
    
    [root@master kube-system]# kubectl logs -f heapster-1258036176-sjg7s  -n kube-system
    I1207 05:53:00.275512       1 heapster.go:71] /heapster --source=kubernetes:http://172.16.187.162:8080 --sink=influxdb:http://172.99.39.6:8086
    I1207 05:53:00.275568       1 heapster.go:72] Heapster version v1.3.0-beta.1
    I1207 05:53:00.275794       1 configs.go:61] Using Kubernetes client with master "http://172.16.187.162:8080" and version v1
    I1207 05:53:00.275816       1 configs.go:62] Using kubelet port 10255
    I1207 05:53:00.283647       1 influxdb.go:252] created influxdb sink with options: host:172.99.39.6:8086 user:root db:k8s
    I1207 05:53:00.283680       1 heapster.go:193] Starting with InfluxDB Sink
    I1207 05:53:00.283687       1 heapster.go:193] Starting with Metric Sink
    I1207 05:53:00.294214       1 heapster.go:105] Starting heapster on port 8082
    I1207 05:54:05.082812       1 influxdb.go:215] Created database "k8s" on influxDB server at "172.99.39.6:8086"
    

    Finally, check Heapster. Collecting data takes a while, so after some time check the nodes' monitoring data:

    [root@master ~]# kubectl top node
    NAME             CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%   
    172.16.187.158   121m         0%        19721Mi         30%       
    172.16.187.159   112m         0%        15805Mi         24%       
    172.16.187.160   172m         1%        28090Mi         43%       
    

    Creating the HPA

    Once all of the steps above have succeeded, we can create a HorizontalPodAutoscaler to manage scaling. Below we test it with the ms-wechat Deployment:

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: ms-wechat  # name
      namespace: default # k8s namespace
    spec:
      maxReplicas: 10  # maximum number of replicas
      minReplicas: 3   # minimum number of replicas
      scaleTargetRef:
        apiVersion: apps/v1beta1
        kind: Deployment
        name: ms-wechat   # scale the Deployment named ms-wechat
      targetCPUUtilizationPercentage: 80  # CPU utilization threshold
    
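    For quick tests, an equivalent HPA can also be created imperatively with kubectl autoscale:

    [root@master ~]# kubectl autoscale deployment ms-wechat --min=3 --max=10 --cpu-percent=80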

    Check the HPA

    [root@master ~]# kubectl get hpa 
    NAME        REFERENCE              TARGETS           MINPODS   MAXPODS   REPLICAS   AGE
    ms-wechat   Deployment/ms-wechat   <unknown> / 80%   3         10        3          10m
    

    If TARGETS shows up as unknown, there are two possible reasons:

    • Check whether the target Deployment has CPU resources set; if not, set a limit dynamically with kubectl set resources deployment/ms-wechat --limits=cpu=2000m (see the sketch after this list)
    • Wait a while and check again
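
    A minimal sketch for the first case (the 2000m value follows the command above; adjust it to your workload):

    # Check whether CPU requests/limits are already set on the target Deployment
    [root@master ~]# kubectl describe deployment ms-wechat | grep -A 3 Limits
    # If not, set a CPU limit; with no explicit request, the request defaults to the limit,
    # which gives the HPA a baseline for computing utilization
    [root@master ~]# kubectl set resources deployment/ms-wechat --limits=cpu=2000m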

    Check the result

    [root@master ~]# kubectl get hpa
    NAME        REFERENCE              TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
    ms-wechat   Deployment/ms-wechat   47% / 80%   3         10        3          11m
    

    You can now run a load test and watch how REPLICAS changes.
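
    A simple way to observe this is to watch the HPA while load is generated from another terminal (the ms-wechat Service address below is a placeholder; substitute your own):

    # Watch the HPA react to load
    [root@master ~]# kubectl get hpa ms-wechat -w
    # Hypothetical load generator -- replace <service-ip>:<port> with the ms-wechat Service address
    [root@node0 ~]# while true; do curl -s http://<service-ip>:<port>/ > /dev/null; done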

    Reposted from: https://www.jianshu.com/p/31ed5c98648e
