• Upgrading the Kubernetes Version with kubeadm


    Upgrade kubelet, kubeadm, and kubectl to the latest version (using the Aliyun mirror):

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    setenforce 0
    yum install -y kubelet kubeadm kubectl
    systemctl enable kubelet && systemctl start kubelet
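
    To upgrade to a specific release rather than whatever is latest in the repository, the package versions can be pinned instead. A minimal sketch; 1.12.2-0 matches the images used below, and the exact release suffix in the repo may differ:

    # Install pinned versions of the three packages (the version string is an example)
    yum install -y kubelet-1.12.2-0 kubeadm-1.12.2-0 kubectl-1.12.2-0
    # Reload systemd units and restart kubelet so the new binary is picked up
    systemctl daemon-reload && systemctl restart kubelet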

    Check the container image versions required by this kubeadm release

    $ kubeadm config images list
    k8s.gcr.io/kube-apiserver:v1.12.2
    k8s.gcr.io/kube-controller-manager:v1.12.2
    k8s.gcr.io/kube-scheduler:v1.12.2
    k8s.gcr.io/kube-proxy:v1.12.2
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.2.24
    k8s.gcr.io/coredns:1.2.2
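
    When planning an upgrade to a newer release, kubeadm can also print the image list for a specific target version via the --kubernetes-version flag (v1.13.0 here is only an example, matching the second pull script below):

    $ kubeadm config images list --kubernetes-version v1.13.0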

    Query the available package versions

    $ yum list --showduplicates | grep -E 'kubeadm|kubectl|kubelet'

    Pull the container images

    $ touch pull_k8s_images.sh    # create the script and put the following content into it

    #!/bin/bash
    # Pull each image from the anjia0532 mirror, retag it as k8s.gcr.io, then remove the mirror tag
    images=(kube-proxy:v1.12.2 kube-scheduler:v1.12.2 kube-controller-manager:v1.12.2
    kube-apiserver:v1.12.2 kubernetes-dashboard-amd64:v1.10.0
    etcd:3.2.24 coredns:1.2.2 pause:3.1)
    for imageName in "${images[@]}"; do
        docker pull anjia0532/google-containers.$imageName
        docker tag anjia0532/google-containers.$imageName k8s.gcr.io/$imageName
        docker rmi anjia0532/google-containers.$imageName
    done
    
    $ sh pull_k8s_images.sh
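
    To confirm the images were pulled and retagged as k8s.gcr.io, a quick check (not part of the original post):

    $ docker images | grep 'k8s.gcr.io'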

    Alternatively, the gcrxio mirror carries the same images; for example, for a v1.13.0 upgrade:

    # docker pull gcrxio/kubernetes-dashboard-amd64:v1.10.0

    #!/bin/bash
    # Same approach as above, but pulling the v1.13.0 images from the gcrxio mirror
    images=(kube-proxy:v1.13.0 kube-scheduler:v1.13.0 kube-controller-manager:v1.13.0
    kube-apiserver:v1.13.0 kubernetes-dashboard-amd64:v1.10.0
    etcd:3.2.24 coredns:1.2.6 pause:3.1)
    for imageName in "${images[@]}"; do
        docker pull gcrxio/$imageName
        docker tag gcrxio/$imageName k8s.gcr.io/$imageName
        docker rmi gcrxio/$imageName
    done
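
    Save this variant as its own script and run it the same way (the file name below is just an example):

    $ sh pull_k8s_images_v1.13.sh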

    Other registries that sync these images:

    Check what needs to be upgraded

    $ kubeadm upgrade plan

    Upgrading the worker nodes

    On each worker node, upgrade kubelet, kubeadm, and kubectl to the matching version and pull the images for that version; a sketch of the per-node sequence is shown below.
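
    A minimal sketch, assuming a node named k8s-node-1 (a placeholder) and the v1.12.2 packages from the repo above; the kubeadm upgrade node config subcommand applies to kubeadm v1.11 through v1.14 and may differ in other releases:

    # On the master: evict workloads from the node before upgrading it (node name is an example)
    kubectl drain k8s-node-1 --ignore-daemonsets

    # On the node: upgrade the packages to the target version
    yum install -y kubelet-1.12.2-0 kubeadm-1.12.2-0 kubectl-1.12.2-0

    # On the node: refresh the kubelet configuration for the new version
    kubeadm upgrade node config --kubelet-version v1.12.2

    # On the node: restart kubelet so it runs the new binary with the new config
    systemctl daemon-reload && systemctl restart kubelet

    # Back on the master: allow pods to be scheduled onto the node again
    kubectl uncordon k8s-node-1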

    Upgrading the master node

    $ kubeadm upgrade apply v1.12.2
    [root@kubernetes-master opt]# kubeadm upgrade plan
    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade] Fetching available versions to upgrade to
    [upgrade/versions] Cluster version: v1.12.1
    [upgrade/versions] kubeadm version: v1.12.2
    [upgrade/versions] Latest stable version: v1.12.2
    [upgrade/versions] Latest version in the v1.12 series: v1.12.2
    
    Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
    COMPONENT   CURRENT       AVAILABLE
    Kubelet     3 x v1.12.1   v1.12.2
    
    Upgrade to the latest version in the v1.12 series:
    
    COMPONENT            CURRENT   AVAILABLE
    API Server           v1.12.1   v1.12.2
    Controller Manager   v1.12.1   v1.12.2
    Scheduler            v1.12.1   v1.12.2
    Kube Proxy           v1.12.1   v1.12.2
    CoreDNS              1.2.2     1.2.2
    Etcd                 3.2.24    3.2.24
    
    You can now apply the upgrade by executing the following command:
    
            kubeadm upgrade apply v1.12.2
    
    _____________________________________________________________________
    
    [root@kubernetes-master opt]# kubeadm upgrade apply v1.12.2
    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
    [upgrade/version] You have chosen to change the cluster version to "v1.12.2"
    [upgrade/versions] Cluster version: v1.12.1
    [upgrade/versions] kubeadm version: v1.12.2
    [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
    [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
    [upgrade/prepull] Prepulling image for component etcd.
    [upgrade/prepull] Prepulling image for component kube-apiserver.
    [upgrade/prepull] Prepulling image for component kube-controller-manager.
    [upgrade/prepull] Prepulling image for component kube-scheduler.
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
    [upgrade/prepull] Prepulled image for component kube-controller-manager.
    [upgrade/prepull] Prepulled image for component etcd.
    [upgrade/prepull] Prepulled image for component kube-scheduler.
    [upgrade/prepull] Prepulled image for component kube-apiserver.
    [upgrade/prepull] Successfully prepulled the images for all the control plane components
    [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.12.2"...
    Static pod: kube-apiserver-021rjsh216048s hash: 89c5b90341989424b0e19818b3f6e2be
    Static pod: kube-controller-manager-021rjsh216048s hash: 8813655e72b0b808698db6f851c08ab4
    Static pod: kube-scheduler-021rjsh216048s hash: 3c7cced3664379c67e8177757da3fa42
    [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests977991704"
    [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests977991704/kube-apiserver.yaml"
    [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests977991704/kube-controller-manager.yaml"
    [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests977991704/kube-scheduler.yaml"
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-10-30-13-43-20/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-apiserver-021rjsh216048s hash: 89c5b90341989424b0e19818b3f6e2be
    Static pod: kube-apiserver-021rjsh216048s hash: 37239b196c489a9b62de0bd5d3fa7fab
    [apiclient] Found 1 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-10-30-13-43-20/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-controller-manager-021rjsh216048s hash: 8813655e72b0b808698db6f851c08ab4
    Static pod: kube-controller-manager-021rjsh216048s hash: c30d65a55ad9ffe79d5d36185a605db2
    [apiclient] Found 1 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-10-30-13-43-20/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-scheduler-021rjsh216048s hash: 3c7cced3664379c67e8177757da3fa42
    Static pod: kube-scheduler-021rjsh216048s hash: ee7b1077c61516320f4273309e9b4690
    [apiclient] Found 1 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
    [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "021rjsh216048s" as an annotation
    [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.2". Enjoy!
    
    [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

    Check the node and pod status

    $ systemctl enable kubelet && systemctl restart kubelet    # restart kubelet on each node in turn
    $ kubectl get nodes
    $ kubectl get pod --all-namespaces -o wide
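
    To double-check that every component reports the new version after the restarts (a quick verification, not from the original post):

    $ kubectl version --short
    $ kubelet --version
    $ kubectl get nodes    # the VERSION column should now show v1.12.2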

    REFER:
    https://my.oschina.net/u/2306127/blog/2231184
    https://github.com/kubernetes/kubeadm/issues/1054

  • Original article: https://www.cnblogs.com/Irving/p/9836051.html