• 在 kubernetes pod 中构建 docker image


    更好的阅读体验建议点击下方原文链接。
    原文地址:http://maoqide.live/post/cloud/build-docker-image-in-a-pod-in-kubernetes/
    利用 kaniko 在 kubernetes 集群中使用 pod 来进行镜像构建,并 push 到镜像仓库中。

    pre-request

    • kubernetes cluster
    • kaniko-executor image

    构建

    具体用法可以阅读 kaniko 项目的 README 文档,项目本身支持GCS BucketS3 BucketLocal DirectoryGit Repository四种 buildcontext 存储方案,在实际的使用中,感觉使用内部的文件服务器更加方便,添加了对直接 http/https 下载链接的支持,https://github.com/maoqide/kaniko。

    quick start

    build yourself a image for kaniko executor

    # build yourself a image for kaniko executor
    cd $GOPATH/src/github.com/GoogleContainerTools
    git clone https://github.com/maoqide/kaniko
    make images
    

    start a file server if using http/https for buildcontext

    kaniko's build context is very similar to the build context you would send your Docker daemon for an image build; it represents a directory containing a Dockerfile which kaniko will use to build your image. For example, a COPY command in your Dockerfile should refer to a file in the build context.    
    

    using minio as file server:

    docker run -p 9000:9000 minio/minio server /data
    

    create context.tar.gz

    # tar your build context including Dockerfile into context.tar.gz
    tar -C <path to build context> -zcvf context.tar.gz .
    

    upload to minio and generate a download url.

    create secret on kubernetes

    # registry can also be a harbor or other registry.
    kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
    

    create pod

    apiVersion: v1
    kind: Pod
    metadata:
      name: kaniko
    spec:
      containers:
      - name: kaniko
        env:
        - name: DOCKER_CONFIG
          value: /root/.docker/
        image: harbor.guahao-inc.com/mqtest/executor
        args: [ "--context=http://download_url/context.tar.gz",
                "--destination=maoqide/test:latest",
                "--verbosity=debug",
    		]
        volumeMounts:
          - name: kaniko-secret
            mountPath: /root
          - name: context
            mountPath: /kaniko/buildcontext/
      restartPolicy: Never
      volumes:
        - name: context
          emptyDir: {}
        - name: kaniko-secret
          secret:
            secretName: regcred
            items:
              - key: .dockerconfigjson
                path: .docker/config.json
    

    env DOCKER_CONFIG is required for regidtry authorization, otherwise you would get an UNAUTHORIZED error.

    kubectl create -f pod.yaml
    
    [root@centos10 ~]$ kubectl get po
    NAME                      READY     STATUS      RESTARTS   AGE
    kaniko                    0/1       Completed   0          5h
    

    and you can find your image pushed.

  • 相关阅读:
    Oracle行转列,pivot函数和unpivot函数
    hive中使用spark执行引擎的常用参数
    Spark消费Kafka如何实现精准一次性消费?
    Flink 保证ExactlyOnce
    Flink的容错
    scala实现kafkaProduce1.0读取文件发送到kafka
    flume1.5的几种conf配置
    shell:ps awk杀死进程
    scala的maven项目中的pom文件
    hive开窗函数进阶
  • 原文地址:https://www.cnblogs.com/maoqide/p/11748033.html
Copyright © 2020-2023  润新知