• Monitoring and horizontal autoscaling


    Monitoring: previously required installing the Heapster component first

    Resource metrics: metrics-server

    Custom metrics: Prometheus + k8s-prometheus-adapter

    Custom resource definitions (CRD)

    Developing a custom API server

    Resource metrics API

    The new-generation architecture:

    Core metrics pipeline: made up of the kubelet, metrics-server, and the APIs exposed by the API server; provides cumulative CPU usage, real-time memory usage, Pod resource usage, and container disk usage

    Monitoring pipeline: collects all kinds of metrics from the system and serves them to end users, storage, and the HPA; it carries the core metrics plus many non-core metrics, and the non-core metrics cannot be interpreted by Kubernetes itself

    metrics-server: registered through the API server at

     /apis/metrics.k8s.io/v1beta1

    Deploy metrics-server to obtain the core metrics

    https://github.com/kubernetes-incubator/metrics-server/tree/master/deploy/1.8%2B

    git clone https://github.com/kubernetes-incubator/metrics-server.git

    cd /root/metrics/metrics-server-master/deploy/1.8+

    metrics-server-deployment.yaml

    Change the image address to: mirrorgooglecontainers/metrics-server-amd64:v0.3.3

    vim metrics-server-deployment.yaml

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: metrics-server
      namespace: kube-system
      labels:
        k8s-app: metrics-server
    spec:
      selector:
        matchLabels:
          k8s-app: metrics-server
      template:
        metadata:
          name: metrics-server
          labels:
            k8s-app: metrics-server
        spec:
          serviceAccountName: metrics-server
          volumes:
          # mount in tmp so we can safely use from-scratch images and/or read-only containers
          - name: tmp-dir
            emptyDir: {}
          containers:
          - name: metrics-server
            image: mirrorgooglecontainers/metrics-server-amd64:v0.3.3  # changed image address
            imagePullPolicy: Always
            command:                                        # added
            - /metrics-server                               # added
            - --metric-resolution=30s                       # added
            - --kubelet-insecure-tls                        # added
            - --kubelet-preferred-address-types=InternalIP  # added
            volumeMounts:
            - name: tmp-dir
              mountPath: /tmp

    vim resource-reader.yaml

    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      verbs:
      - get
      - list
      - watch

    kubectl apply -f ./

    kubectl get pods -n kube-system  -o wide

    kubectl describe pods -n kube-system metrics-server-95cc6867b-nm8g4

    kubectl api-versions  # metrics.k8s.io/v1beta1 should now be listed

    Usage

    Start a reverse proxy

    kubectl proxy --port=8080

    In another terminal, fetch the data

    curl http://localhost:8080/apis/metrics.k8s.io/v1beta1

    Get nodes

     curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes

    Get pods

    curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods

    Or use kubectl directly

    kubectl top nodes node1

    kubectl top pod --all-namespaces
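    The metrics API reports CPU and memory as Kubernetes quantity strings (e.g. 123m cores, 1024Ki bytes). Below is a minimal Python sketch of converting them to plain numbers; the sample payload is illustrative, not real cluster output:

```python
# Convert Kubernetes resource quantity strings (as returned by the
# metrics API) into plain numbers. Covers the suffixes metrics-server
# commonly emits; a full parser would handle more (k, M, G, ...).
SUFFIXES = {"n": 1e-9, "u": 1e-6, "m": 1e-3,
            "Ki": 2**10, "Mi": 2**20, "Gi": 2**30}

def parse_quantity(q: str) -> float:
    # Try the longest suffixes first so "Ki" wins over single letters.
    for suffix, factor in sorted(SUFFIXES.items(), key=lambda kv: -len(kv[0])):
        if q.endswith(suffix):
            return float(q[:-len(suffix)]) * factor
    return float(q)  # plain number, no suffix

# Illustrative item shaped like /apis/metrics.k8s.io/v1beta1/nodes output
sample = {"usage": {"cpu": "123m", "memory": "1024Ki"}}
cpu_cores = parse_quantity(sample["usage"]["cpu"])     # ~0.123 cores
mem_bytes = parse_quantity(sample["usage"]["memory"])  # 1048576 bytes
```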

    Prometheus monitoring

    Container logs: /var/log/containers

    Prometheus architecture: Prometheus is itself a time-series database

    node_exporter --> prometheus <--> PromQL <-- kube-state-metrics <-- custom metrics api (k8s-prometheus-adapter)
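    Prometheus answers PromQL queries over its HTTP API (/api/v1/query), returning JSON in a documented shape. A small Python sketch of pulling values out of an instant-vector result; the payload here is a hand-made sample, not output from a live server:

```python
import json

# Hand-made sample in the documented shape of a Prometheus
# /api/v1/query instant-vector response.
raw = """{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {"metric": {"instance": "node1", "job": "node-exporter"},
       "value": [1565000000, "0.42"]}
    ]
  }
}"""

resp = json.loads(raw)
assert resp["status"] == "success"

# Each vector sample is a [timestamp, value-as-string] pair.
values = {r["metric"]["instance"]: float(r["value"][1])
          for r in resp["data"]["result"]}
```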

    Deployment

    Official manifests: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/prometheus

    Manifests actually used here: https://github.com/ikubernetes/k8s-prom

    Procedure

    mkdir prometheus && cd prometheus/

    unzip k8s-prom-master.zip  && cd k8s-prom-master/

    Create the namespace

    kubectl apply -f namespace.yaml

    Create node_exporter, the per-node client agent

    cd node_exporter/

    kubectl apply -f ./

    Create the Prometheus server itself

    cd prometheus/

    kubectl apply -f ./

    Test access: http://192.168.81.10:30090/graph

    Create kube-state-metrics, which converts cluster state into a format Kubernetes can consume

    cd kube-state-metrics/

    Change the image address: vim kube-state-metrics-deploy.yaml

    mirrorgooglecontainers/kube-state-metrics-amd64:v1.3.1

    kubectl apply -f ./

    Create k8s-prometheus-adapter, the API server for the custom metrics

    cd k8s-prometheus-adapter/

    Download https://github.com/DirectXMan12/k8s-prometheus-adapter/tree/master/deploy/manifests, replace the contents of the k8s-prometheus-adapter directory with it, and change the custom-metrics namespace to prom

    Then apply every file in that directory

    kubectl exec -it   -n prom custom-metrics-apiserver-fd48dd8c-khrkn -- /bin/sh

    kubectl api-versions

    custom.metrics.k8s.io/v1beta1

    kubectl proxy --port=8080

    In another terminal

    curl http://localhost:8080/apis/custom.metrics.k8s.io/v1beta1/
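    The adapter serves custom metrics as MetricValueList objects, with values encoded as Kubernetes quantities (433m means 0.433). A sketch of reading per-pod values out of one; the payload below is illustrative, not captured from a real adapter:

```python
# Illustrative MetricValueList, shaped like the response from
# /apis/custom.metrics.k8s.io/v1beta1/namespaces/<ns>/pods/*/http_requests
sample = {
    "kind": "MetricValueList",
    "apiVersion": "custom.metrics.k8s.io/v1beta1",
    "items": [
        {"describedObject": {"kind": "Pod", "name": "myapp-custom-1"},
         "metricName": "http_requests",
         "value": "433m"},   # 433m == 0.433 requests/sec
    ],
}

def milli_to_float(q: str) -> float:
    # Quantities may carry an 'm' (milli) suffix or be plain numbers.
    return float(q[:-1]) / 1000 if q.endswith("m") else float(q)

rates = {item["describedObject"]["name"]: milli_to_float(item["value"])
         for item in sample["items"]}
```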

    Integrating Grafana

    cp grafana.yaml ../prometheus/k8s-prom-master

    vim grafana.yaml

    # set the namespace in both the Deployment and the Service:
      namespace: prom

    # in the Service, pin the NodePort:
      nodePort: 32002

    # and comment out the influxdb env vars:
    #- name: INFLUXDB_HOST
    #  value: monitoring-influxdb

    kubectl apply -f grafana.yaml

    Configure Grafana (add a data source)

    Name: Prometheus

    Type: Prometheus

    URL: http://prometheus.prom.svc:9090

    then Save & Test

    Dashboard templates

    https://grafana.com/grafana/dashboards/8588 --> Download JSON

    On the dashboard home page: + --> Import --> Upload JSON --> choose the data source

    The Kubernetes Cluster (Prometheus) template is recommended

    HPA: automatic horizontal scaling of application replicas

    kubectl explain hpa

    kubectl explain hpa.spec

    kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=1 --requests='cpu=50m,memory=256Mi' \
      --limits='cpu=50m,memory=256Mi' --labels='app=myapp' --expose --port=80

    --expose --port=80 also creates a Service exposing port 80

    Enable autoscaling

    kubectl autoscale deployment myapp --min=1 --max=8 --cpu-percent=60

    --min / --max bound the pod count; --cpu-percent=60 targets at most 60% CPU utilization

    kubectl get hpa                                         
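    Under the hood the HPA controller computes the desired replica count as ceil(currentReplicas × currentMetricValue / targetMetricValue), clamped to the min/max bounds. A simplified Python sketch; the real controller also applies a tolerance band and stabilization windows, which are omitted here:

```python
import math

def desired_replicas(current, current_util, target_util, min_r, max_r):
    # ceil(currentReplicas * currentMetricValue / desiredMetricValue),
    # clamped into [min_r, max_r].
    want = math.ceil(current * current_util / target_util)
    return max(min_r, min(max_r, want))

# With --min=1 --max=8 --cpu-percent=60:
desired_replicas(2, 90, 60, 1, 8)   # 2 pods at 90% avg CPU -> 3 pods
desired_replicas(4, 150, 60, 1, 8)  # would want 10, clamped to max 8
desired_replicas(3, 10, 60, 1, 8)   # would want 1, floor is min 1
```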

    Load testing:

    yum install httpd-tools -y

    ab -c 100 -n 50000 http://10.96.150.45/index.html

    kubectl describe hpa

    kubectl delete deployments.apps myapp

    kubectl delete pod pod-demo --force --grace-period=0  # force-delete a pod

    Using the HPA v2 API

    kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=1 --requests='cpu=50m,memory=256Mi' \
      --limits='cpu=50m,memory=256Mi' --labels='app=myapp' --expose --port=80

    vim hpa-v2-demo.yaml

    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: myapp-hpa-v2-2
    spec:
      scaleTargetRef:                    # the object this HPA scales
        apiVersion: apps/v1
        kind: Deployment                 # scale a Deployment
        name: myapp                      # ...named myapp
      minReplicas: 1                     # lower bound on replicas
      maxReplicas: 10                    # upper bound on replicas
      metrics:                           # which metrics drive the decision
      - type: Resource                   # resource-based metric
        resource:
          name: cpu                      # based on CPU
          targetAverageUtilization: 55   # scale out when average CPU exceeds 55%
      - type: Resource
        resource:
          name: memory                   # based on memory
          targetAverageValue: 50Mi       # scale out when average usage exceeds 50Mi

    kubectl apply -f  hpa-v2-demo.yaml

    kubectl get hpa

    Run the load test

    ab -c 100 -n 50000 http://10.96.150.45/index.html

        -c concurrency level (100 concurrent requests); -n total number of requests to send

    HPA v2 with a custom metric

    kubectl run myapp-custom --image=ikubernetes/metrics-app --replicas=1 --requests='cpu=50m,memory=256Mi'  --limits='cpu=50m,memory=256Mi' --labels='app=myapp' --expose --port=80

    vim hpa-v2-custom.yaml

    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: myapp-hpa-v2
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: myapp-custom
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Pods                       # per-pod custom metric
        pods:
          metricName: http_requests      # custom metric: http_requests rate
          targetAverageValue: 800m       # scale out above 0.8 requests per pod on average

    Load test

    ab -c 100 -n 50000 http://10.102.237.99/index.html
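    targetAverageValue: 800m is a quantity: 0.8 http_requests per pod on average. The same averaging-then-ceiling rule as the CPU case applies; a sketch working in milli-units, as the API reports them (simplified, no tolerance band):

```python
import math

TARGET_MILLI = 800  # targetAverageValue: 800m, i.e. 0.8 requests per pod

def replicas_for(per_pod_milli, min_replicas=1, max_replicas=10):
    # Sum the per-pod metric (sum == current_pods * average), then scale
    # so the per-pod average would drop back to the target.
    want = math.ceil(sum(per_pod_milli) / TARGET_MILLI)
    return max(min_replicas, min(max_replicas, want))

# Two pods each reporting 2000m (~2 req/s) during the ab run:
replicas_for([2000, 2000])   # -> 5 replicas
```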

  • Original post: https://www.cnblogs.com/leiwenbin627/p/11361518.html