• k8s labels (label)


    1. Label a node so that pod scheduling can select it (a nodeSelector sketch follows the output below)

    kubectl label node 节点名 disktype=ssd
    kubectl label node master1 disktype=ssd
    
    Result:
    [root@master1 ~]# kubectl get nodes --show-labels
    NAME      STATUS   ROLES                  AGE     VERSION   LABELS
    master1   Ready    control-plane,master   2d14h   v1.20.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=master1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
    node1     Ready    <none>                 2d14h   v1.20.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux
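    A minimal sketch of how a pod can target the labeled node via nodeSelector; the pod name and image below are illustrative, only the disktype=ssd label comes from the steps above:
    
    apiVersion: v1
    kind: Pod
    metadata:
      name: ssd-demo            # illustrative name
    spec:
      nodeSelector:
        disktype: ssd           # must match the label set on the node above
      containers:
      - name: app
        image: nginx            # illustrative image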

    2. Label a service. Labels can be applied multiple times using the key=value form, but each key must be unique (see the example after the output below)

    kubectl label services http-svc http=v1
    
    Result:
    
    [root@master1 ~]# kubectl get services  --show-labels
    NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE     LABELS
    http-svc     NodePort    172.30.169.160   <none>        8080:30001/TCP   38h     http=v1
    kubernetes   ClusterIP   172.30.0.1       <none>        443/TCP          2d14h   component=apiserver,provider=kubernetes
    nginx        ClusterIP   None             <none>        80/TCP           14h     app=nginx
    [root@master1 ~]#
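    For example, a second label with a different key can be attached to the same service, and labels can then be used to filter resources; the key env below is illustrative:
    
    kubectl label services http-svc env=test       # second label, different key
    kubectl get services -l http=v1 --show-labels  # list only services carrying this label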

    3. To change the value of an existing label, add the --overwrite flag

    kubectl label services http-svc http=v2 --overwrite
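    Conversely, a label can be removed by appending a minus sign to the key:
    
    kubectl label services http-svc http-    # removes the http label from the service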