• Taking a k8s node offline


    Sometimes, when we want to update some servers or maintain/upgrade images, we need to take some node(s) temporarily out of rotation. The correct procedure is as follows:

    1. Get the node list

    root@k8s-master1:~# kubectl get nodes -o wide
    NAME          STATUS   ROLES                  AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
    k8s-master1   Ready    control-plane,master   103d   v1.22.0   192.168.255.100   <none>        Ubuntu 18.04.5 LTS   4.15.0-154-generic   docker://20.10.8
    k8s-master2   Ready    control-plane,master   103d   v1.22.0   192.168.255.102   <none>        Ubuntu 18.04.5 LTS   4.15.0-154-generic   docker://20.10.8
    k8s-node1     Ready    <none>                 103d   v1.22.0   192.168.255.103   <none>        Ubuntu 18.04.5 LTS   4.15.0-154-generic   docker://20.10.8
    k8s-node2     Ready    <none>                 103d   v1.22.0   192.168.255.104   <none>        Ubuntu 18.04.5 LTS   4.15.0-154-generic   docker://20.10.8
    

    2. Mark the node as unschedulable (cordon)

    root@k8s-master1:~# kubectl cordon k8s-node2 
    node/k8s-node2 cordoned
    root@k8s-master1:~# kubectl get nodes -o wide
    NAME          STATUS                     ROLES                  AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
    k8s-master1   Ready                      control-plane,master   103d   v1.22.0   192.168.255.100   <none>        Ubuntu 18.04.5 LTS   4.15.0-154-generic   docker://20.10.8
    k8s-master2   Ready                      control-plane,master   103d   v1.22.0   192.168.255.102   <none>        Ubuntu 18.04.5 LTS   4.15.0-154-generic   docker://20.10.8
    k8s-node1     Ready                      <none>                 103d   v1.22.0   192.168.255.103   <none>        Ubuntu 18.04.5 LTS   4.15.0-154-generic   docker://20.10.8
    k8s-node2     Ready,SchedulingDisabled   <none>                 103d   v1.22.0   192.168.255.104   <none>        Ubuntu 18.04.5 LTS   4.15.0-154-generic   docker://20.10.8
    root@k8s-master1:~#
    

    You can see that scheduling onto the k8s-node2 node has been disabled (SchedulingDisabled).
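
    Cordon only stops new pods from being scheduled onto the node; pods that are already running there keep running. If you want to verify what cordon actually changed, a quick check (just a sketch, not required for the procedure) is to look at the node's unschedulable flag and the taint that typically accompanies it:

    # show the spec.unschedulable flag set by cordon (should print "true")
    kubectl get node k8s-node2 -o jsonpath='{.spec.unschedulable}'
    # show the node.kubernetes.io/unschedulable:NoSchedule taint applied to the node
    kubectl describe node k8s-node2 | grep -i -A1 taints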

    3. Evict the Pods running on the node

    If you need to evict the pods on the node, run:

    root@k8s-master1:~# kubectl drain k8s-node2 
    node/k8s-node2 already cordoned
    DEPRECATED WARNING: Aborting the drain command in a list of nodes will be deprecated in v1.23.
    The new behavior will make the drain command go through all nodes even if one or more nodes failed during the drain.
    For now, users can try such experience via: --ignore-errors
    error: unable to drain node "k8s-node2", aborting command...
    
    There are pending nodes to be drained:
     k8s-node2
    error: cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/kube-flannel-ds-wdwzs, kube-system/kube-proxy-dk4vn
    root@k8s-master1:~#
    

    Note: if the node has pods created by a DaemonSet controller, you need to add --ignore-daemonsets to ignore the resulting error:

    root@k8s-master1:~# kubectl drain k8s-node2 --ignore-daemonsets
    node/k8s-node2 already cordoned
    WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-wdwzs, kube-system/kube-proxy-dk4vn
    node/k8s-node2 drained
    root@k8s-master1:~# 
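
    Depending on what is running on the node, kubectl drain may need a few more flags than --ignore-daemonsets. A sketch of the commonly used ones (flag names as of kubectl v1.22; check kubectl drain --help on your version):

    # --ignore-daemonsets      skip DaemonSet-managed pods (the DaemonSet would recreate them anyway)
    # --delete-emptydir-data   also evict pods using emptyDir volumes (their local data is lost)
    # --force                  evict pods that are not managed by any controller
    # --grace-period=60        per-pod termination grace period, in seconds
    # --timeout=5m             abort if the drain takes longer than this
    kubectl drain k8s-node2 --ignore-daemonsets --delete-emptydir-data --grace-period=60 --timeout=5m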
    

    List the pods; you can see that no pods are running on the k8s-node2 node anymore:

    root@k8s-master1:~# kubectl get pod -o wide
    NAME                             READY   STATUS    RESTARTS       AGE   IP            NODE        NOMINATED NODE   READINESS GATES
    php-fpm-nginx-5d8b7fd989-59ljf   2/2     Running   0              91m   10.10.3.243   k8s-node1   <none>           <none>
    php-fpm-nginx-5d8b7fd989-kfmln   2/2     Running   22 (22h ago)   24d   10.10.3.238   k8s-node1   <none>           <none>
    redis-svc-866fb777f8-4ps2s       1/1     Running   11 (22h ago)   24d   10.10.3.240   k8s-node1   <none>           <none>
    redis-svc-866fb777f8-lflkv       1/1     Running   11 (22h ago)   24d   10.10.3.239   k8s-node1   <none>           <none>
    test-www-7fcc4d5595-4mg5j        2/2     Running   0              91m   10.10.3.242   k8s-node1   <none>           <none>
    test-www-7fcc4d5595-kcllj        2/2     Running   0              91m   10.10.3.241   k8s-node1   <none>           <none>
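
    Note that the listing above only covers the default namespace. The DaemonSet-managed pods that drain skipped (kube-flannel, kube-proxy) are still running on the drained node; a quick way to confirm what is left on k8s-node2 across all namespaces (a verification sketch, not part of the original procedure):

    # list every pod still scheduled on k8s-node2, in all namespaces
    kubectl get pods -A -o wide --field-selector spec.nodeName=k8s-node2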
    

    4. Restore schedulability
    When you want the node to accept pods again, run:

    root@k8s-master1:~# kubectl uncordon k8s-node2
    node/k8s-node2 uncordoned
    

    Check the node status:

    root@k8s-master1:~# kubectl get nodes -o wide
    NAME          STATUS   ROLES                  AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
    k8s-master1   Ready    control-plane,master   103d   v1.22.0   192.168.255.100   <none>        Ubuntu 18.04.5 LTS   4.15.0-154-generic   docker://20.10.8
    k8s-master2   Ready    control-plane,master   103d   v1.22.0   192.168.255.102   <none>        Ubuntu 18.04.5 LTS   4.15.0-154-generic   docker://20.10.8
    k8s-node1     Ready    <none>                 103d   v1.22.0   192.168.255.103   <none>        Ubuntu 18.04.5 LTS   4.15.0-154-generic   docker://20.10.8
    k8s-node2     Ready    <none>                 103d   v1.22.0   192.168.255.104   <none>        Ubuntu 18.04.5 LTS   4.15.0-154-generic   docker://20.10.8
    root@k8s-master1:~#
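
    Keep in mind that uncordon only makes the node schedulable again; the pods that were evicted earlier have already been recreated on other nodes and will not move back by themselves. If you want to spread the workload back onto k8s-node2, one option (a sketch; the deployment name here is taken from the pod listing above) is to trigger a rolling restart so the scheduler can place some replicas on the node again:

    # rolling restart of a deployment; new replicas may now land on k8s-node2
    kubectl rollout restart deployment php-fpm-nginx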
    

    5. Delete the node
    If you need to remove the node from the cluster outright (a rather brute-force approach, generally not recommended):
    kubectl delete nodes k8s-node2
    If the deleted node later needs to rejoin the k8s cluster, simply regenerate a join token on the master and run the join command on the node, as sketched below.
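
    Assuming the cluster was built with kubeadm (which the control-plane/master roles above suggest), a minimal sketch of rejoining a deleted node looks like this; the exact endpoint, token and hash come from your own cluster:

    # on a control-plane node: print a fresh join command (tokens expire after 24h by default)
    kubeadm token create --print-join-command

    # on the deleted node: clean up the old kubelet/kubeadm state first, then run the printed command
    kubeadm reset
    # kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>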

  • Original article: https://www.cnblogs.com/eddie1127/p/15152660.html