• Building custom docker images for the Kubernetes master services


    Preface

      I previously deployed Kubernetes 1.13.0 and found that the master services start differently from 1.10.4: kube-apiserver, kube-controller-manager and kube-scheduler now each start from their own image instead of sharing a single hyperkube image. However, the official kube-controller-manager image does not include the ceph client, so RBD volumes cannot be provisioned. A custom image with the ceph client installed therefore has to be built.

    1. Environment

      OS: CentOS 7.2

      Docker: 18.03.1-ce

      Kubernetes: 1.13.0

    2. Download the Kubernetes source code

      Use git clone with the -b option to fetch the source for the matching version:

    # git clone -b v1.13.0 https://github.com/kubernetes/kubernetes.git
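
      To confirm the checkout is on the intended release tag, an optional sanity check (not part of the original steps):

    # cd kubernetes
    # git describe --tags    # should print v1.13.0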

    3. Download the base images

      The base images used during the build are:

    k8s.gcr.io/kube-cross:v1.11.2-1
    k8s.gcr.io/pause-amd64:3.1
    k8s.gcr.io/debian-base-amd64:0.4.0
    k8s.gcr.io/debian-iptables-amd64:v11.0
    k8s.gcr.io/debian-hyperkube-base-amd64:0.12.0

      These images can be built from source with make release, but for convenience I used the prebuilt official images. I pulled them and pushed them to a private registry so that builds can run on different machines, as sketched below.
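
      A minimal mirroring sketch (assuming the private registry is registry.example.com/k8s_gcr, the address used in the patches below):

    # for img in kube-cross:v1.11.2-1 pause-amd64:3.1 debian-base-amd64:0.4.0 \
          debian-iptables-amd64:v11.0 debian-hyperkube-base-amd64:0.12.0; do
        docker pull k8s.gcr.io/${img}
        docker tag k8s.gcr.io/${img} registry.example.com/k8s_gcr/${img}
        docker push registry.example.com/k8s_gcr/${img}
      done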

    4. Change the image registry addresses

      Since a local registry is used, the registry addresses in the build scripts need to be changed. The changes are as follows:

      build/build-image/Dockerfile:

    # git diff build/build-image/Dockerfile
    diff --git a/build/build-image/Dockerfile b/build/build-image/Dockerfile
    index ff4543b..976a377 100644
    --- a/build/build-image/Dockerfile
    +++ b/build/build-image/Dockerfile
    @@ -13,7 +13,7 @@
     # limitations under the License.
     
     # This file creates a standard build environment for building Kubernetes
    -FROM k8s.gcr.io/kube-cross:KUBE_BUILD_IMAGE_CROSS_TAG
    +FROM registry.example.com/k8s_gcr/kube-cross:KUBE_BUILD_IMAGE_CROSS_TAG
     
     # Mark this as a kube-build container
     RUN touch /kube-build-image

      build/common.sh:

    # git diff build/common.sh
    diff --git a/build/common.sh b/build/common.sh
    index b3b7748..902b08f 100755
    --- a/build/common.sh
    +++ b/build/common.sh
    @@ -91,14 +91,15 @@ kube::build::get_docker_wrapped_binaries() {
       local arch=$1
       local debian_base_version=0.4.0
       local debian_iptables_version=v11.0
    +  local registry="registry.example.com/k8s_gcr"
       ### If you change any of these lists, please also update DOCKERIZED_BINARIES
       ### in build/BUILD. And kube::golang::server_image_targets
       local targets=(
    -    cloud-controller-manager,"k8s.gcr.io/debian-base-${arch}:${debian_base_version}"
    -    kube-apiserver,"k8s.gcr.io/debian-base-${arch}:${debian_base_version}"
    -    kube-controller-manager,"k8s.gcr.io/debian-base-${arch}:${debian_base_version}"
    -    kube-scheduler,"k8s.gcr.io/debian-base-${arch}:${debian_base_version}"
    -    kube-proxy,"k8s.gcr.io/debian-iptables-${arch}:${debian_iptables_version}"
    +    cloud-controller-manager,"${registry}/debian-base-${arch}:${debian_base_version}"
    +    kube-apiserver,"${registry}/debian-base-${arch}:${debian_base_version}"
    +    kube-controller-manager,"${registry}/debian-base-${arch}:${debian_base_version}"
    +    kube-scheduler,"${registry}/debian-base-${arch}:${debian_base_version}"
    +    kube-proxy,"${registry}/debian-iptables-${arch}:${debian_iptables_version}"
       )
     
       echo "${targets[@]}"

      build/root/WORKSPACE:

    # git diff build/root/WORKSPACE
    diff --git a/build/root/WORKSPACE b/build/root/WORKSPACE
    index cee8962..f1a7c37 100644
    --- a/build/root/WORKSPACE
    +++ b/build/root/WORKSPACE
    @@ -71,7 +71,7 @@ http_file(
     docker_pull(
         name = "debian-base-amd64",
         digest = "sha256:86176bc8ccdc4d8ea7fbf6ba4b57fcefc2cb61ff7413114630940474ff9bf751",
    -    registry = "k8s.gcr.io",
    +    registry = "registry.example.com/k8s_gcr",
         repository = "debian-base-amd64",
         tag = "0.4.0",  # ignored, but kept here for documentation
     )
    @@ -79,7 +79,7 @@ docker_pull(
     docker_pull(
         name = "debian-iptables-amd64",
         digest = "sha256:d4ff8136b9037694a3165a7fff6a91e7fc828741b8ea1eda226d4d9ea5d23abb",
    -    registry = "k8s.gcr.io",
    +    registry = "registry.example.com/k8s_gcr",
         repository = "debian-iptables-amd64",
         tag = "v11.0",  # ignored, but kept here for documentation
     )
    @@ -87,7 +87,7 @@ docker_pull(
     docker_pull(
         name = "debian-hyperkube-base-amd64",
         digest = "sha256:4a77bc882f7d629c088a11ff144a2e86660268fddf63b61f52b6a93d16ab83f0",
    -    registry = "k8s.gcr.io",
    +    registry = "registry.example.com/k8s_gcr",
         repository = "debian-hyperkube-base-amd64",
         tag = "0.12.0",  # ignored, but kept here for documentation
     )

      build/lib/release.sh:

    # git diff build/lib/release.sh
    diff --git a/build/lib/release.sh b/build/lib/release.sh
    index d7ccc01..47d9e37 100644
    --- a/build/lib/release.sh
    +++ b/build/lib/release.sh
    @@ -327,7 +327,7 @@ function kube::release::create_docker_images_for_server() {
         local images_dir="${RELEASE_IMAGES}/${arch}"
         mkdir -p "${images_dir}"
     
    -    local -r docker_registry="k8s.gcr.io"
    +    local -r docker_registry="registry.example.com/k8s_gcr"
         # Docker tags cannot contain '+'
         local docker_tag="${KUBE_GIT_VERSION/+/_}"
         if [[ -z "${docker_tag}" ]]; then

    5. Install the ceph client in the kube-controller-manager image

      The Kubernetes master service images are built by kube::release::create_docker_images_for_server() in build/lib/release.sh, and their Dockerfiles are also generated dynamically by that function. Modify the function so that the generated Dockerfile contains the commands to install the ceph client.

      The base image k8s.gcr.io/debian-base-amd64:0.4.0 is based on Debian 9.5 Stretch. Ceph mimic uses the C++17 standard and needs gcc 8, so it does not support Debian Stretch; only the luminous version of the ceph client can be installed. Alternatively, third-party builds can be used, such as the deb packages from croit (see References).
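
      To verify the Debian release of the base image before building, a quick check (assuming the image can be pulled and run locally):

    # docker run --rm k8s.gcr.io/debian-base-amd64:0.4.0 cat /etc/debian_version   # prints 9.5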

      Building on the changes from step 4, modify build/lib/release.sh further:

    # git diff build/lib/release.sh
    diff --git a/build/lib/release.sh b/build/lib/release.sh
    index d7ccc01..0d03da9 100644
    --- a/build/lib/release.sh
    +++ b/build/lib/release.sh
    @@ -327,7 +327,7 @@ function kube::release::create_docker_images_for_server() {
         local images_dir="${RELEASE_IMAGES}/${arch}"
         mkdir -p "${images_dir}"
     
    -    local -r docker_registry="k8s.gcr.io"
    +    local -r docker_registry="registry.example.com/k8s_gcr"
         # Docker tags cannot contain '+'
         local docker_tag="${KUBE_GIT_VERSION/+/_}"
         if [[ -z "${docker_tag}" ]]; then
    @@ -370,11 +370,22 @@ function kube::release::create_docker_images_for_server() {
             cat <<EOF > "${docker_file_path}"
     FROM ${base_image}
     COPY ${binary_name} /usr/local/bin/${binary_name}
    +RUN echo "deb http://mirrors.aliyun.com/debian/ stretch main non-free contrib
    deb-src http://mirrors.aliyun.com/debian/ stretch main non-free contrib
    +RUN apt-get update && apt-get -y install apt-transport-https gnupg2 wget curl
     EOF
             # ensure /etc/nsswitch.conf exists so go's resolver respects /etc/hosts
             if [[ "${base_image}" =~ busybox ]]; then
               echo "COPY nsswitch.conf /etc/" >> "${docker_file_path}"
             fi
    +
    +        # install ceph client
    +        if [[ ${binary_name} =~ "kube-controller-manager" ]]; then
    +          echo "RUN wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -" >> "${docker_file_path}"
    +          echo "RUN echo 'deb https://download.ceph.com/debian-luminous/ stretch main' > /etc/apt/sources.list.d/ceph.list" >> "${docker_file_path}"
    +          echo "RUN apt-get update && apt-get install -y ceph-common ceph-fuse" >> "${docker_file_path}"
    +        fi
    +
             "${DOCKER[@]}" build -q -t "${docker_image_tag}" "${docker_build_path}" >/dev/null
             "${DOCKER[@]}" save "${docker_image_tag}" > "${binary_dir}/${binary_name}.tar"
             echo "${docker_tag}" > "${binary_dir}/${binary_name}.docker_tag"

      This switches the apt sources of all the images to the aliyun mirror and adds the ceph client installation commands to the kube-controller-manager Dockerfile; the result should come out roughly like the sketch below.
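
      For reference, after these changes the generated Dockerfile for kube-controller-manager should look roughly like the following (reconstructed from the heredoc above, not captured from an actual build):

    FROM registry.example.com/k8s_gcr/debian-base-amd64:0.4.0
    COPY kube-controller-manager /usr/local/bin/kube-controller-manager
    RUN echo "deb http://mirrors.aliyun.com/debian/ stretch main non-free contrib" > /etc/apt/sources.list
    RUN echo "deb-src http://mirrors.aliyun.com/debian/ stretch main non-free contrib" >> /etc/apt/sources.list
    RUN apt-get update && apt-get -y install apt-transport-https gnupg2 wget curl
    RUN wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -
    RUN echo 'deb https://download.ceph.com/debian-luminous/ stretch main' > /etc/apt/sources.list.d/ceph.list
    RUN apt-get update && apt-get install -y ceph-common ceph-fuse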

      That completes the preparation; next, build the images.

    6. Build the kubernetes images

      The build commands:

    # cd kubernetes
    # make clean
    # KUBE_BUILD_PLATFORMS=linux/amd64 KUBE_BUILD_CONFORMANCE=n KUBE_BUILD_HYPERKUBE=n make release-images GOFLAGS=-v GOGCFLAGS="-N -l"

      Here KUBE_BUILD_PLATFORMS=linux/amd64 sets the target platform to linux/amd64, KUBE_BUILD_CONFORMANCE=n and KUBE_BUILD_HYPERKUBE=n skip the conformance and hyperkube images, GOFLAGS=-v enables verbose logging, and GOGCFLAGS="-N -l" disables compiler optimizations and inlining, which makes the resulting binaries easier to debug.

    7. Load the images

      After the build finishes, the compiled binaries and the docker image tarballs are saved under _output/release-stage/server/linux-amd64/kubernetes/server/bin/. Load the kube-controller-manager image, then tag and push it to the registry:

    # docker load -i _output/release-stage/server/linux-amd64/kubernetes/server/bin/kube-controller-manager.tar
    # docker tag registry.example.com/k8s_gcr/kube-controller-manager:v1.13.0 registry.example.com/k8s_gcr/kube-controller-manager:v1.13.0-ceph-luminous
    # docker push registry.example.com/k8s_gcr/kube-controller-manager:v1.13.0-ceph-luminous
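
      To verify that the ceph client actually made it into the image, run the rbd binary inside it (using the tag pushed above):

    # docker run --rm --entrypoint rbd registry.example.com/k8s_gcr/kube-controller-manager:v1.13.0-ceph-luminous --version   # should report ceph 12.x (luminous)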

      The build process is now complete. On the master nodes, change the image field in /etc/kubernetes/manifests/kube-controller-manager.yaml; kubelet watches the static pod manifests, so the change takes effect immediately, and the result can be seen when creating an RBD PV.
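
      As a sketch, the manifest edit can be done with sed (the pattern and the tag are assumptions matching the image built above; there is only one image field in this manifest):

    # sed -i 's|image: .*|image: registry.example.com/k8s_gcr/kube-controller-manager:v1.13.0-ceph-luminous|' \
        /etc/kubernetes/manifests/kube-controller-manager.yaml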

      Testing showed that kube-controller-manager created rbd images without problems, but mounting rbd on the nodes failed. Further testing showed the failure had little to do with the ceph client version; it depends on the kernel version. After upgrading the node kernel from 3.10 to 4.17, the volumes mounted normally.
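
      To see whether a node is affected, check its kernel and try mapping an image by hand (testpool/testimg is a hypothetical image; a working ceph.conf and keyring on the node are assumed):

    # uname -r                   # 3.10.x here; newer rbd image features need a 4.x kernel
    # rbd map testpool/testimg   # on 3.10 this typically fails with an unsupported-features error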

    References

    croit | Debian 9 (Stretch) Ceph Mimic mirror
