7. k8s scheduler: affinity and taints

    Default scheduling process: Predicates (filter nodes) --> Priorities (rank the survivors) --> pick the highest-scoring node
    In practice, to steer Pod placement according to your needs, you use the following:
    a fixed node (nodeName), nodeSelector, nodeAffinity (node affinity), podAffinity (pod affinity), and podAntiAffinity (pod anti-affinity)

    #Scheduling to a specific node

    # Pod.spec.nodeName pins the Pod to the named Node directly, bypassing the Scheduler entirely
    #node-name-demo.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-nodename
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: demo1
      template:
        metadata:
          labels:
            app: demo1
        spec:
          nodeName: node03  #pin to this Node
          containers:
          - name: demo1
            image: alivv/nginx:node
            ports:
            - containerPort: 80
    
    #Deploy
    kubectl apply -f node-name-demo.yaml
    #All pods land on node03 (if node03 does not exist, they stay Pending forever)
    kubectl get pod -o wide
    #Clean up
    kubectl delete -f node-name-demo.yaml
    
    # Pod.spec.nodeSelector is a hard constraint: the scheduler places the Pod only on nodes whose labels match
    #node-selector-demo.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-node-selector
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: demo1
      template:
        metadata:
          labels:
            app: demo1
        spec:
          nodeSelector:
            test1: node  #match label test1=node
          containers:
          - name: demo1
            image: alivv/nginx:node
            ports:
            - containerPort: 80
    
    #Deploy
    kubectl apply -f node-selector-demo.yaml
    
    #The pods stay Pending (no node has the label yet)
    kubectl get pod -o wide
    #Add the label to node02
    kubectl label nodes node02 test1=node
    kubectl get nodes --show-labels
    #Check again: the pods are now on node02
    kubectl get pod -o wide
    
    #Clean up
    kubectl delete -f node-selector-demo.yaml
    kubectl label nodes node02 test1-
    

    Affinity scheduling

    Affinity rules come in two flavors: soft and hard

    • preferredDuringSchedulingIgnoredDuringExecution: soft rule; if no node satisfies it, the scheduler ignores it and the Pod still starts
    • requiredDuringSchedulingIgnoredDuringExecution: hard rule; if no node satisfies it, the Pod stays Pending

    Operators

    • In: the label's value is in the given list
    • NotIn: the label's value is not in the given list
    • Gt: the label's value is greater than the given value
    • Lt: the label's value is less than the given value
    • Exists: the label key exists
    • DoesNotExist: the label key does not exist
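
    The demos below only use In and NotIn; as a sketch, Gt and Exists look like this inside a matchExpressions block (the label keys cpu-cores and disktype are made up for illustration, not part of the demos):

    - matchExpressions:
      - key: cpu-cores     #assumes nodes were labeled e.g. cpu-cores=8
        operator: Gt       #Gt/Lt compare the label value as an integer
        values:
        - "4"
      - key: disktype
        operator: Exists   #Exists/DoesNotExist take no values list
    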

    #Node affinity: pod.spec.nodeAffinity

    #node-affinity-demo.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: node-affinity
      labels:
        app: affinity
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: affinity
      template:
        metadata:
          labels:
            app: affinity
        spec:
          containers:
          - name: nginx
            image: alivv/nginx:node
            ports:
            - containerPort: 80
              name: nginxweb
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:  #hard rule: never on node01
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: NotIn
                    values:
                    - node01
              preferredDuringSchedulingIgnoredDuringExecution:  #soft rule: prefer nodes labeled test2=node
              - weight: 1
                preference:
                  matchExpressions:
                  - key: test2
                    operator: In
                    values:
                    - node
    
    #Add the label to node03
    kubectl label nodes node03 test2=node
    kubectl get nodes --show-labels
    
    #Deploy
    kubectl apply -f node-affinity-demo.yaml
    
    #Check the pods: none on node01, node03 preferred
    kubectl get pod -o wide
    
    #Clean up
    kubectl delete -f node-affinity-demo.yaml
    kubectl label nodes node03 test2-
    

    #Pod affinity / anti-affinity: pod.spec.affinity.podAffinity / podAntiAffinity

    podAffinity (pod affinity) co-locates Pods in the same topology domain or on the same node
    podAntiAffinity (pod anti-affinity) keeps Pods away from each other
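
    topologyKey decides what counts as "together". The demo below uses kubernetes.io/hostname (same node); as a sketch, a soft anti-affinity that instead spreads replicas across zones would look like this (assuming nodes carry the standard topology.kubernetes.io/zone label):

    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: pod-affinity
            topologyKey: topology.kubernetes.io/zone  #one replica per zone when possible
    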

    #pod-affinity-demo.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: pod-affinity-demo
      labels:
        app: pod-affinity
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: pod-affinity
      template:
        metadata:
          labels:
            app: pod-affinity
        spec:
          containers:
          - name: nginx
            image: alivv/nginx:node
            ports:
            - containerPort: 80
              name: nginxweb
          affinity:
            podAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:  #hard rule
              - labelSelector:  #match Pods labeled app=demo1
                  matchExpressions:
                  - key: app
                    operator: In
                    values:
                    - demo1
                topologyKey: kubernetes.io/hostname
    
    #Deploy
    kubectl apply -f pod-affinity-demo.yaml
    
    #All pods are Pending: no Pod labeled app=demo1 exists yet
    kubectl get pod -o wide
    
    #Deploy node-name-demo.yaml from above
    kubectl apply -f  node-name-demo.yaml
    #Check again: all pods are now on node03
    kubectl get pod -o wide
    
    #Pod anti-affinity test
    #Change podAffinity to podAntiAffinity
    sed -i 's/podAffinity/podAntiAffinity/' pod-affinity-demo.yaml
    kubectl apply -f pod-affinity-demo.yaml
    
    #Check: the pod-affinity-demo pods move off node03
    kubectl get pod -o wide
    
    #Clean up
    kubectl delete -f pod-affinity-demo.yaml
    kubectl delete -f node-name-demo.yaml
    

    # Taints and tolerations

    A node marked with taints repels Pods: unless a Pod tolerates the node's taints, it will not be scheduled there
    On a kubeadm-installed cluster, the master node gets a NoSchedule taint by default

    Set a taint with: kubectl taint nodes node-name key=value:effect
    key and value form the taint's label (value may be empty); effect describes what the taint does and supports three options:

    • NoSchedule: never schedule Pods onto the tainted Node
    • PreferNoSchedule: try to avoid scheduling Pods onto the tainted Node
    • NoExecute: never schedule Pods onto the tainted Node, and evict Pods already running there
    #Add a taint to node03
    kubectl taint nodes node03 test-taint=node:NoSchedule
    #Check
    kubectl describe node node03 |grep Taints
    

    Tolerations: pod.spec.tolerations

    #pod-tolerations-demo.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: pod-tolerations-demo
      labels:
        app: pod-tolerations
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: pod-tolerations
      template:
        metadata:
          labels:
            app: pod-tolerations
        spec:
          containers:
          - name: nginx
            image: alivv/nginx:node
            ports:
            - containerPort: 80
              name: http
    
    #Create the Pods
    kubectl apply -f pod-tolerations-demo.yaml
    
    #Check: no Pod is created on node03 because of its taint
    kubectl get pod -o wide
    
    #Append a toleration for the taint to pod-tolerations-demo.yaml
    echo '#tolerate the taint
          tolerations:
          - key: "test-taint"
            #value: "node"
            operator: "Exists"  #ignore the value
            effect: "NoSchedule"
    '>>pod-tolerations-demo.yaml
    cat pod-tolerations-demo.yaml
    
    #Update
    kubectl apply -f pod-tolerations-demo.yaml
    #Scale up
    kubectl scale deployment pod-tolerations-demo --replicas 5
    
    #Check again: node03 now runs Pods
    kubectl get pod -o wide
    
    #Clean up
    kubectl delete -f pod-tolerations-demo.yaml
    #Remove the taint
    kubectl taint nodes node03 test-taint-
    

    #With no key specified, a toleration matches every taint key

    tolerations:
    - operator: "Exists"
    

    #With no effect specified, a toleration matches every effect of the given key

    tolerations:
    - key: "key"
      operator: "Exists"
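    

    #Related: with a NoExecute taint, tolerationSeconds bounds how long an already-running Pod may keep running after the taint appears (a sketch reusing the test-taint key from the demo above):

    tolerations:
    - key: "test-taint"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 60  #evicted 60s after the taint is added; omit to tolerate it forever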
    

    #To avoid wasting resources, you can let the master run Pods with the following:

    kubectl taint nodes --all  node-role.kubernetes.io/master- #remove the default taint first
    kubectl taint nodes Node-Name node-role.kubernetes.io/master=:PreferNoSchedule
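    
    On newer kubeadm clusters (v1.24 and later) the default taint key is node-role.kubernetes.io/control-plane rather than node-role.kubernetes.io/master; the equivalent commands would be (Node-Name is a placeholder for your node's name):

    kubectl taint nodes --all node-role.kubernetes.io/control-plane-  #remove the default taint first
    kubectl taint nodes Node-Name node-role.kubernetes.io/control-plane=:PreferNoSchedule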
    

    Blog address: https://www.cnblogs.com/elvi/p/11755828.html
    Git repo for this post: https://gitee.com/alivv/k8s/tree/master/notes
