[Repost] Deploying Helm in K8S


    Helm is the package management tool for K8S.

    1. The Helm client (Helm itself)

    Install it via the script: curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > helm.sh, then make it executable and run it:

    chmod +x helm.sh
    ./helm.sh

    # Output
    Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.13.1-linux-amd64.tar.gz
    Preparing to install helm and tiller into /usr/local/bin
    helm installed into /usr/local/bin/helm
    tiller installed into /usr/local/bin/tiller
    Run 'helm init' to configure helm.

    # Verify
    helm help

    Note: the script may fail with curl: (7) Failed connect to kubernetes-helm.storage.googleapis.com:443; Network is unreachable. Re-running it a few times usually gets through.
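
    If the script keeps failing, the client can also be installed by hand from the same release tarball the script downloads (a sketch, assuming v2.13.1 as above; the download may still need a working route or a mirror):

    # Download and unpack the client release
    curl -LO https://kubernetes-helm.storage.googleapis.com/helm-v2.13.1-linux-amd64.tar.gz
    tar -zxvf helm-v2.13.1-linux-amd64.tar.gz
    # Copy the binaries into place and verify
    cp linux-amd64/helm linux-amd64/tiller /usr/local/bin/
    helm help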

    2. The Tiller server

    Running helm init installs Tiller into the K8S cluster (in the kube-system namespace). Although the command reports success, checking the container status in K8S reveals a Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.13.1".... error. The tiller-deploy YAML shows the container image is gcr.io/kubernetes-helm/tiller:v2.13.1, which cannot be pulled. On Docker Hub, check whether the mirrorgooglecontainers namespace (a mirror of Google's images) has it; it does not, so look for a user-pushed image with docker search tiller:v2.13.1, pull one, retag it and remove the old one (best done on every node, since no node selector is specified):

    docker pull hekai/gcr.io_kubernetes-helm_tiller_v2.13.1
    docker tag hekai/gcr.io_kubernetes-helm_tiller_v2.13.1 gcr.io/kubernetes-helm/tiller:v2.13.1
    docker rmi hekai/gcr.io_kubernetes-helm_tiller_v2.13.1

    Check the pod again; it is now running.
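
    Alternatively, instead of re-tagging the image on every node, helm init can be pointed at an image that is actually pullable (a sketch; the mirror repository below is only an example and must be replaced with one reachable from your nodes):

    # Upgrade the existing tiller-deploy to an explicitly specified, pullable image
    helm init --upgrade --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.1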

    Grant Tiller permissions:

    # Create the service account
    kubectl create serviceaccount --namespace kube-system tiller
    kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

    # Update the API object with kubectl patch
    kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

    # Check whether the authorization succeeded
    kubectl get deploy --namespace kube-system tiller-deploy --output yaml|grep serviceAccount

    serviceAccount: tiller
    serviceAccountName: tiller


    helm version

    Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
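
    You can also check the Tiller pod directly (the label selector below matches the labels used by the default tiller-deploy):

    kubectl get pods -n kube-system -l app=helm,name=tiller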

    To uninstall Tiller: helm reset or helm reset --force.

    3. Usage

    Create a Helm chart (a package in Helm is called a chart):

    # Pull the test code
    git clone https://github.com/daemonza/testapi.git;

    cd testapi
    # Create the chart skeleton
    helm create testapi-chart

    The generated chart skeleton looks like this:

    testapi-chart
    ├── charts
    ├── Chart.yaml
    ├── templates
    │ ├── deployment.yaml
    │ ├── _helpers.tpl
    │ ├── ingress.yaml
    │ ├── NOTES.txt
    │ ├── service.yaml
    │ └── tests
    └── values.yaml

    The templates directory holds the templates for the K8S deployment files. Chart.yaml looks like this:

    # The chart API version; must be set to v1
    apiVersion: v1
    # Optional
    appVersion: "1.0"
    # Optional
    description: A Helm chart for Kubernetes
    # The chart name; required
    name: testapi-chart
    # The chart version; required and must follow SemVer
    version: 0.1.0

    values.yaml looks like this:

    # Default values for testapi-chart.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.

    replicaCount: 1

    image:
      repository: nginx
      tag: stable
      pullPolicy: IfNotPresent

    nameOverride: ""
    fullnameOverride: ""

    service:
      type: ClusterIP
      port: 80

    ingress:
      enabled: false
      annotations: {}
        # kubernetes.io/ingress.class: nginx
        # kubernetes.io/tls-acme: "true"
      hosts:
        - host: chart-example.local
          paths: []

      tls: []
      #  - secretName: chart-example-tls
      #    hosts:
      #      - chart-example.local

    resources: {}
      # We usually recommend not to specify default resources and to leave this as a conscious
      # choice for the user. This also increases chances charts run on environments with little
      # resources, such as Minikube. If you do want to specify resources, uncomment the following
      # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
      # limits:
      #   cpu: 100m
      #   memory: 128Mi
      # requests:
      #   cpu: 100m
      #   memory: 128Mi

    nodeSelector: {}

    tolerations: []

    affinity: {}
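
    These values are consumed by the files under templates/; for example, the deployment.yaml generated by helm create references them roughly like this (a simplified excerpt, not the complete generated file):

    containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
          - name: http
            containerPort: 80
            protocol: TCP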

    You can cd into the directory containing Chart.yaml and lint the chart:

    cd testapi-chart

    # Lint the chart
    helm lint
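
    Besides linting, you can also render the templates locally without creating anything in the cluster (run from inside the chart directory, as above):

    # Render the manifests with the values filled in; nothing is submitted to K8S
    helm install --dry-run --debug .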

    If everything is OK, package the chart (run this one level up, outside the directory that holds Chart.yaml):

    # The --debug flag is optional and shows extra output; testapi-chart is the chart directory to package, and the package is written to the current directory
    helm package testapi-chart --debug

    # Output
    Successfully packaged chart and saved it to: /root/k8s/helm/testapi/testapi-chart-0.1.0.tgz
    [debug] Successfully saved /root/k8s/helm/testapi/testapi-chart-0.1.0.tgz to /root/.helm/repository/local

    The package now sits in the current directory (the debug output shows it was also saved to the local Helm repository). Install it into the cluster with helm install testapi-chart-0.1.0.tgz; the output looks like this:

    NAME:   lumbering-zebu
    LAST DEPLOYED: Fri Apr 26 18:54:26 2019
    NAMESPACE: default
    STATUS: DEPLOYED

    RESOURCES:
    ==> v1/Deployment
    NAME READY UP-TO-DATE AVAILABLE AGE
    lumbering-zebu-testapi-chart 0/1 1 0 0s

    ==> v1/Pod(related)
    NAME READY STATUS RESTARTS AGE
    lumbering-zebu-testapi-chart-7fb48fc7b6-n6824 0/1 ContainerCreating 0 0s

    ==> v1/Service
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    lumbering-zebu-testapi-chart ClusterIP 10.97.1.55 <none> 80/TCP 0s


    NOTES:
    1. Get the application URL by running these commands:
    export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=testapi-chart,app.kubernetes.io/instance=lumbering-zebu" -o jsonpath="{.items[0].metadata.name}")
    echo "Visit http://127.0.0.1:8080 to use your application"
    kubectl port-forward $POD_NAME 8080:80
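
    After running the port-forward from the NOTES, a quick local check should return the default nginx page (assuming curl is available):

    curl http://127.0.0.1:8080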

    The commands above have created a deployment in K8S; looking at the default namespace you will find a new Deployment named lumbering-zebu-testapi-chart. You can list the installed releases:

    helm ls

    # Output
    NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
    lumbering-zebu 1 Fri Apr 26 18:54:26 2019 DEPLOYED testapi-chart-0.1.0 1.0 default
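
    A single release can also be inspected in more detail with the standard Helm v2 commands:

    # Deployed resources and status of the release
    helm status lumbering-zebu
    # Revision history of the release
    helm history lumbering-zebu
    # Values the release was rendered with
    helm get values lumbering-zebu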

    Change the chart version in Chart.yaml from 0.1.0 to 0.1.1, package and install again, then check the deployments:

    kubectl get deployments

    NAME READY UP-TO-DATE AVAILABLE AGE
    lumbering-zebu-testapi-chart 1/1 1 1 13m
    odd-chicken-testapi-chart 1/1 1 1 85s

    There are now two deployments, so the old release needs to be deleted: helm delete lumbering-zebu (use the release name, not the -testapi-chart deployment name); helm ls or kubectl get pods then shows that the old deployment is gone. A deleted release can still be rolled back:

    # Roll the testapi release back to revision 1; note the name does not include -testapi-chart
    helm rollback lumbering-zebu 1
    # Output
    Rollback was a success! Happy Helming!
    # Verify
    helm ls

    This only works if you remember the name of the deleted release; in practice you can look it up with helm ls --deleted.
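
    Note that in Helm v2 a plain helm delete keeps the release history around, which is exactly what makes the rollback above possible; to remove a release completely (after which it can no longer be rolled back), purge it:

    # List deleted releases, then remove one for good
    helm ls --deleted
    helm delete --purge lumbering-zebu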

    To upgrade, after editing the relevant Chart.yaml you can simply run helm upgrade odd-chicken . from the chart's directory:

    # Verify
    helm ls
    # The chart version has changed
    NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
    odd-chicken 2 Fri Apr 26 19:26:21 2019 DEPLOYED testapi-chart2-2.1.1 2.0 default
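
    Instead of editing files, an upgrade can also override individual values on the command line (a sketch run from the chart directory, using the replicaCount value from the default values.yaml above):

    # Scale the release to 2 replicas without touching values.yaml
    helm upgrade odd-chicken . --set replicaCount=2
    # Roll back to an earlier revision if the upgrade misbehaves
    helm rollback odd-chicken 1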

    [Setting up a Helm repository]

    This feels more and more like mvn. A Helm repository is just a web server; for example, you can serve charts from the charts directory with helm serve --repo-path ./charts. To manage charts through a web UI you may also want to install Monocular; the steps are as follows:

    # Pull the required image
    docker pull registry.cn-shanghai.aliyuncs.com/hhu/defaultbackend:1.4
    docker tag registry.cn-shanghai.aliyuncs.com/hhu/defaultbackend:1.4 k8s.gcr.io/defaultbackend:1.4
    docker rmi registry.cn-shanghai.aliyuncs.com/hhu/defaultbackend:1.4

    # Install the Nginx Ingress controller
    helm install stable/nginx-ingress --set controller.hostNetwork=true,rbac.create=true

    # Add the repo (the latest source)
    helm repo add monocular https://helm.github.io/monocular
    # Install Monocular
    helm install monocular/monocular

    Then wait for the pods to come up; once the installation is complete, the Monocular web page can be reached through the Nginx Ingress installed above.
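
    Independently of Monocular, a plain chart repository can be served and consumed directly with the client (a minimal sketch; 8879 is helm serve's default port and myrepo is an arbitrary name):

    # Serve the local charts directory as a repository
    helm serve --repo-path ./charts &

    # Register it, refresh the index and search it
    helm repo add myrepo http://127.0.0.1:8879/charts
    helm repo update
    helm search testapi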

    [Using a Helm repository]

    Using Helmet as the Helm repository, you can deploy it into the K8S cluster and add charts to it.

    Reposted from: https://blog.wgl.wiki/K8S%E4%B8%AD%E9%83%A8%E7%BD%B2Helm/
