0.4 kubeadm Parameter Reference


    kubeadm Parameter Reference

    kubeadm
    alpha          experimental (alpha-stage) commands
    completion     set up shell command completion
    config         manage the kubeadm cluster configuration, which is persisted in a ConfigMap in the cluster
    help           show help
    init           bootstrap a Kubernetes control-plane node
    join           join a node to an existing cluster
    reset          revert the changes made to the host by kubeadm init or kubeadm join
    token          manage bootstrap tokens
    upgrade        upgrade the Kubernetes version
    version        print version information

    Options

    Parameter    Description
    --apiserver-advertise-address string    The IP address the API server advertises it is listening on. If not set, the default network interface is used.
    --apiserver-bind-port int32    The port the API server binds to. Default: 6443.
    --apiserver-cert-extra-sans stringSlice    Optional extra Subject Alternative Names (SANs) for the API server serving certificate. Can be both IP addresses and DNS names.
    --cert-dir string    The path where certificates are saved and stored. Default: "/etc/kubernetes/pki".
    --certificate-key string    The key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
    --config string    Path to a kubeadm configuration file.
    --control-plane-endpoint string    A stable IP address or DNS name for the control plane, i.e. a long-lived, highly available VIP or domain name.
    --cri-socket string    Path of the CRI socket to connect to. If empty, kubeadm tries to auto-detect it; set this only if multiple CRIs are installed or the CRI socket is non-standard.
    --dry-run    Do not apply any changes; only print what would be done.
    --feature-gates string    A set of key=value pairs describing feature gates. Options are: IPv6DualStack=true
    --ignore-preflight-errors stringSlice    A list of checks whose errors are tolerated, e.g. 'IsPrivilegedUser,Swap'. The value 'all' ignores errors from all checks.
    --image-repository string    The container registry to pull control-plane images from. Default: "k8s.gcr.io".
    --kubernetes-version string    Choose a specific Kubernetes version for the control plane. Default: "stable-1".
    --node-name string    Specify the node name.
    --pod-network-cidr string    The IP address range for the pod network. If set, the control plane automatically allocates CIDRs to every node.
    --service-cidr string    An alternative IP address range for service virtual IPs. Default: "10.96.0.0/12".
    --service-dns-domain string    An alternative domain for services, e.g. "myorg.internal". Default: "cluster.local".
    --skip-certificate-key-print    Do not print the key used to encrypt the control-plane certificates.
    --skip-phases stringSlice    A list of phases to skip.
    --skip-token-print    Skip printing the default bootstrap token generated by 'kubeadm init'.
    --token string    The token used to establish bidirectional trust between control-plane nodes and worker nodes. Format: [a-z0-9]{6}.[a-z0-9]{16}  Example: abcdef.0123456789abcdef
    --token-ttl duration    The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token never expires. Default: 24h0m0s.
    --upload-certs    Upload the control-plane certificates to the kubeadm-certs Secret.
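
    Most of the flags above can equivalently be supplied through a file passed via --config. A minimal sketch follows; the values are illustrative, not defaults, and kubeadm.k8s.io/v1beta2 is the API version of the v1.18 era (it may differ in other releases):

```yaml
# Hypothetical kubeadm config equivalent to common init flags.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.15            # --kubernetes-version
controlPlaneEndpoint: "k8s-vip:6443"   # --control-plane-endpoint (example VIP/domain)
imageRepository: k8s.gcr.io            # --image-repository
networking:
  podSubnet: 10.244.0.0/16             # --pod-network-cidr (example value)
  serviceSubnet: 10.96.0.0/12          # --service-cidr
  dnsDomain: cluster.local             # --service-dns-domain
```

    The file is then passed as kubeadm init --config kubeadm.yaml.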

    Workflow of the init Command

    The kubeadm init command bootstraps a Kubernetes control-plane node by executing the following steps.

    1. Runs a series of pre-flight checks to validate the system state before making any changes. Some checks only trigger warnings; others are treated as errors and abort kubeadm until the problem is fixed or the user passes --ignore-preflight-errors=<list-of-errors>.
    2. Generates a self-signed CA to establish an identity for every component in the cluster. Users can provide their own CA certificate and/or key by placing them in the certificate directory configured via --cert-dir (default: /etc/kubernetes/pki). The API server certificate gets an additional SAN entry for each value of --apiserver-cert-extra-sans, lowercased if necessary.
    3. Writes kubeconfig files into /etc/kubernetes/ for the kubelet, the controller manager, and the scheduler to use when connecting to the API server, each with its own identity, plus a separate kubeconfig file named admin.conf for administrative operations.
    4. Generates static Pod manifests for the API server, controller manager, and scheduler. If no external etcd service is provided, an additional static Pod manifest is generated for etcd. The static Pod manifests are written to /etc/kubernetes/manifests; the kubelet watches this directory and creates the Pods at startup. Once the control-plane Pods are up and running, the kubeadm init flow continues.
    5. Applies labels and taints to the control-plane node so that no other workloads are scheduled on it.
    6. Generates a bootstrap token that additional nodes can use to register themselves with the control plane. As described in the kubeadm token documentation, users can optionally supply a token via --token.
    7. Performs all the configuration necessary to allow nodes to join the cluster via the mechanisms described in the Bootstrap Tokens and TLS Bootstrap documents:
    • Creates a ConfigMap providing the information needed to join the cluster, and sets up the related RBAC access rules for it.
    • Allows bootstrap tokens to access the CSR signing API.
    • Configures automatic approval of new CSR requests.

    For more information, see kubeadm join.

    8. Installs a DNS server (CoreDNS) and the kube-proxy add-on via the API server. As of Kubernetes 1.11, CoreDNS is the default DNS server. To install kube-dns instead of CoreDNS, the DNS add-on must be configured in the kubeadm ClusterConfiguration. For more on that configuration, see the "Using kubeadm init with a configuration file" section below. Note that although the DNS server is deployed, it is not scheduled until a CNI plugin is installed.

    Warning: as of v1.18, kube-dns support in kubeadm is deprecated and will be removed in a future release.
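
    For context, in the kubeadm.k8s.io/v1beta2 API the add-on is selected via the dns field of ClusterConfiguration. A sketch of opting into kube-dns (deprecated, as the warning above notes; field names are from the v1beta2 API):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
dns:
  type: kube-dns   # the default is CoreDNS; kube-dns support is deprecated since v1.18
```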

    Upgrading a k8s Cluster

    To upgrade a k8s cluster, kubeadm itself must first be upgraded to the target k8s version; in other words, an up-to-date kubeadm is the prerequisite for any k8s upgrade.

    Upgrading the k8s master services

    # Check the current k8s version
    kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.9", GitCommit:"94f372e501c973a7fa9eb40ec9ebd2fe7ca69848", GitTreeState:"clean", BuildDate:"2020-09-16T13:54:01Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
    # List the versions available in the package repository
    apt-cache madison kubeadm
    # Install the target kubeadm version
    apt install kubeadm=1.18.15-00
    # Show the upgrade plan
    kubeadm upgrade plan
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [preflight] Running pre-flight checks.
    [upgrade] Running cluster health checks
    [upgrade] Fetching available versions to upgrade to
    [upgrade/versions] Cluster version: v1.18.0
    [upgrade/versions] kubeadm version: v1.18.15
    I0131 22:51:12.699531   59939 version.go:252] remote version is much newer: v1.20.2; falling back to: stable-1.18
    [upgrade/versions] Latest stable version: v1.18.15
    [upgrade/versions] Latest stable version: v1.18.15
    [upgrade/versions] Latest version in the v1.18 series: v1.18.15
    [upgrade/versions] Latest version in the v1.18 series: v1.18.15
    
    Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
    COMPONENT   CURRENT       AVAILABLE
    Kubelet     6 x v1.18.9   v1.18.15
    
    Upgrade to the latest version in the v1.18 series:
    
    COMPONENT            CURRENT   AVAILABLE
    API Server           v1.18.0   v1.18.15
    Controller Manager   v1.18.0   v1.18.15
    Scheduler            v1.18.0   v1.18.15
    Kube Proxy           v1.18.0   v1.18.15
    CoreDNS              1.6.7     1.6.7
    Etcd                 3.4.3     3.4.3-0
    
    You can now apply the upgrade by executing the following command:
    
    	kubeadm upgrade apply v1.18.15
    
    _____________________________________________________________________
    # Apply the upgrade; ideally pre-pull the images in advance
    kubeadm upgrade apply v1.18.15
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [preflight] Running pre-flight checks.
    [upgrade] Running cluster health checks
    [upgrade/version] You have chosen to change the cluster version to "v1.18.15"
    [upgrade/versions] Cluster version: v1.18.0
    [upgrade/versions] kubeadm version: v1.18.15
    [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
    [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
    [upgrade/prepull] Prepulling image for component etcd.
    [upgrade/prepull] Prepulling image for component kube-controller-manager.
    [upgrade/prepull] Prepulling image for component kube-scheduler.
    [upgrade/prepull] Prepulling image for component kube-apiserver.
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
    [apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
    [apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
    [apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
    [apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
    [upgrade/prepull] Prepulled image for component kube-apiserver.
    [upgrade/prepull] Prepulled image for component kube-controller-manager.
    [upgrade/prepull] Prepulled image for component etcd.
    [upgrade/prepull] Prepulled image for component kube-scheduler.
    [upgrade/prepull] Successfully prepulled the images for all the control plane components
    [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.15"...
    Static pod: kube-apiserver-kubeadm-master1 hash: 314026e401872d5847b47665a21ccf3f
    Static pod: kube-controller-manager-kubeadm-master1 hash: b1fa2b781e902ea7b52f45d7df09bb94
    Static pod: kube-scheduler-kubeadm-master1 hash: c26311817f3004db2d16fe7c7aa210e6
    [upgrade/etcd] Upgrading to TLS for etcd
    [upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.15" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
    [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests009679931"
    W0131 22:58:33.974938   60872 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
    [upgrade/staticpods] Renewing apiserver certificate
    [upgrade/staticpods] Renewing apiserver-kubelet-client certificate
    [upgrade/staticpods] Renewing front-proxy-client certificate
    [upgrade/staticpods] Renewing apiserver-etcd-client certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-31-22-58-32/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-apiserver-kubeadm-master1 hash: 314026e401872d5847b47665a21ccf3f
    Static pod: kube-apiserver-kubeadm-master1 hash: 18932e05b9d1bf2ffc370bfef1026a5d
    [apiclient] Found 3 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
    [upgrade/staticpods] Renewing controller-manager.conf certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-31-22-58-32/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-controller-manager-kubeadm-master1 hash: b1fa2b781e902ea7b52f45d7df09bb94
    Static pod: kube-controller-manager-kubeadm-master1 hash: ad3b9f4161c26ffce9687912afece5eb
    [apiclient] Found 3 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
    [upgrade/staticpods] Renewing scheduler.conf certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-31-22-58-32/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-scheduler-kubeadm-master1 hash: c26311817f3004db2d16fe7c7aa210e6
    Static pod: kube-scheduler-kubeadm-master1 hash: 51c17337156d8bd02f716c120687fc59
    [apiclient] Found 3 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.15". Enjoy!
    
    [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
    
    # List available kubelet and kubectl versions
    apt-cache madison kubelet 
    apt-cache madison kubectl
    # Install the target versions
    apt install -y kubelet=1.18.15-00 kubectl=1.18.15-00
    # Verify the nodes
    kubectl get node
    NAME              STATUS   ROLES    AGE   VERSION
    kubeadm-master1   Ready    master   8h    v1.18.15
    kubeadm-master2   Ready    master   8h    v1.18.15
    kubeadm-master3   Ready    master   8h    v1.18.15
    kubeadm-node01    Ready    worker   8h    v1.18.9
    kubeadm-node02    Ready    worker   8h    v1.18.9
    kubeadm-node03    Ready    worker   8h    v1.18.9
    
    

    Upgrading the k8s node services

    # Upgrade the kubelet on each node
    kubeadm upgrade node --kubelet-version v1.18.15
    Flag --kubelet-version has been deprecated, This flag is deprecated and will be removed in a future version.
    [upgrade] Reading configuration from the cluster...
    [upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.18.15"...
    Static pod: kube-apiserver-kubeadm-master1 hash: 18932e05b9d1bf2ffc370bfef1026a5d
    Static pod: kube-controller-manager-kubeadm-master1 hash: ad3b9f4161c26ffce9687912afece5eb
    Static pod: kube-scheduler-kubeadm-master1 hash: 51c17337156d8bd02f716c120687fc59
    [upgrade/etcd] Upgrading to TLS for etcd
    [upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.15" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
    [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests117610871"
    W0131 23:14:23.148870   74890 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
    [upgrade/staticpods] Current and new manifests of kube-apiserver are equal, skipping upgrade
    [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
    [upgrade/staticpods] Current and new manifests of kube-controller-manager are equal, skipping upgrade
    [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
    [upgrade/staticpods] Current and new manifests of kube-scheduler are equal, skipping upgrade
    [upgrade] The control plane instance for this node was successfully updated!
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [upgrade] The configuration for this node was successfully updated!
    [upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
    # Install the kubeadm and kubelet packages
    apt install -y kubeadm=1.18.15-00 kubelet=1.18.15-00
    
    kubectl get node
    NAME              STATUS   ROLES    AGE   VERSION
    kubeadm-master1   Ready    master   8h    v1.18.15
    kubeadm-master2   Ready    master   8h    v1.18.15
    kubeadm-master3   Ready    master   8h    v1.18.15
    kubeadm-node01    Ready    worker   8h    v1.18.15
    kubeadm-node02    Ready    worker   8h    v1.18.15
    kubeadm-node03    Ready    worker   8h    v1.18.15
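
    Note that the upstream upgrade documentation also recommends draining each node before upgrading its kubelet and uncordoning it afterwards. A sketch of that per-node rollout; the function is only defined here, not run, and the node name and versions follow the examples above:

```shell
# Sketch of a per-node kubelet upgrade with drain/uncordon.
# Run kubectl from a control-plane node; run the package/upgrade steps on the node itself.
upgrade_node() {
  node="$1"
  kubectl drain "$node" --ignore-daemonsets        # evict workloads from the node
  # On the node itself:
  #   apt install -y kubeadm=1.18.15-00 kubelet=1.18.15-00
  #   kubeadm upgrade node
  #   systemctl restart kubelet
  kubectl uncordon "$node"                         # mark the node schedulable again
}
# Example: upgrade_node kubeadm-node01
```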
    

    kubeadm token

    Until a new node has obtained its certificates, communication between the node and the API server is bootstrapped using a token and the CA's signature. The concrete steps are:

    # Create a token
    kubeadm token create
    kiyfhw.xiacqbch8o8fa8qj
    # List tokens
    kubeadm token list
    # Compute the sha256 hash of the CA public key
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
    5417eb1b68bd4e7a4c82aded83abc55ec91bd601e45734d6aba85de8b1ebb057
    # Assemble the join command
    kubeadm join 18.16.202.35:6443 --token kiyfhw.xiacqbch8o8fa8qj --discovery-token-ca-cert-hash sha256:5417eb1b68bd4e7a4c82aded83abc55ec91bd601e45734d6aba85de8b1ebb057
    # Do all of the above in one step
    kubeadm token create --print-join-command
    # Manually generate a token and print the join command (--ttl=0 makes it never expire)
    token=$(kubeadm token generate)
    kubeadm token create $token --print-join-command --ttl=0
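
    The token format and the CA-hash pipeline above can be exercised without a live cluster. This sketch generates a throwaway self-signed certificate as a stand-in for /etc/kubernetes/pki/ca.crt; all paths and the sample token are illustrative:

```shell
# Check a sample token against the documented format: [a-z0-9]{6}.[a-z0-9]{16}
echo "abcdef.0123456789abcdef" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo "token format ok"

# Throwaway CA certificate standing in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Same pipeline as above: sha256 of the DER-encoded public key
HASH=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${HASH}"   # the value passed to --discovery-token-ca-cert-hash
```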
    
  • Original article: https://www.cnblogs.com/Gmiaomiao/p/14354661.html