• Ceph RBD (CSI) integration with a Kubernetes StorageClass


    Reference: the official Ceph documentation's guide to Kubernetes integration.

    CREATE A POOL

    By default, Ceph block devices use the rbd pool. Create a dedicated pool for Kubernetes volume storage. Ensure your Ceph cluster is running, then create the pool:

    $ ceph osd pool create kubernetes
    

    A newly created pool must be initialized before use. Initialize it with the rbd tool:

    $ rbd pool init kubernetes
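
    As an optional sanity check (not part of the original workflow), you can confirm that the pool exists and that the init step tagged it with the rbd application:

    $ ceph osd pool ls detail | grep kubernetes
    $ ceph osd pool application get kubernetes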
    

    CONFIGURE CEPH-CSI

    SETUP CEPH CLIENT AUTHENTICATION

    Create a new user for Kubernetes and ceph-csi. Run the following and record the generated key:

    $ ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes'
    [client.kubernetes]
        key = AQD9o0Fd6hQRChAAt7fMaSZXduT3NWEqylNpmg==
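
    If you need the key again later (for example when filling in the Secret below), it can be retrieved without re-creating the user:

    $ ceph auth get-key client.kubernetes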
    

    GENERATE CEPH-CSI CONFIGMAP

    ceph-csi requires a ConfigMap object stored in Kubernetes that defines the Ceph monitor addresses for the Ceph cluster. Collect the Ceph cluster's unique fsid and its monitor addresses:

    $ ceph mon dump
    dumped monmap epoch 2
    epoch 2
    fsid 4d8fec26-e363-4753-b60f-49d69ab44cab
    last_changed 2021-06-11T15:58:01.818800+0800
    created 2021-06-11T15:55:49.619584+0800
    min_mon_release 15 (octopus)
    0: [v2:172.20.0.7:3300/0,v1:172.20.0.7:6789/0] mon.storage-ceph01
    1: [v2:172.20.0.11:3300/0,v1:172.20.0.11:6789/0] mon.storage-ceph03
    2: [v2:172.20.0.26:3300/0,v1:172.20.0.26:6789/0] mon.storage-ceph02
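
    If you only need the fsid by itself, ceph fsid prints it directly (same value as in the monmap above):

    $ ceph fsid
    4d8fec26-e363-4753-b60f-49d69ab44cab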
    

    Generate a csi-config-map.yaml file similar to the example below, substituting the fsid for "clusterID" and the monitor addresses for "monitors":

    $ cat <<EOF > csi-config-map.yaml
    ---
    apiVersion: v1
    kind: ConfigMap
    data:
      config.json: |-
        [
          {
            "clusterID": "4d8fec26-e363-4753-b60f-49d69ab44cab",
            "monitors": [
              "172.20.0.7:6789",
              "172.20.0.11:6789",
              "172.20.0.26:6789"
            ]
          }
        ]
    metadata:
      name: ceph-csi-config
    EOF
    

    Once generated, store the new ConfigMap object in Kubernetes:

    $ kubectl apply -f csi-config-map.yaml
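
    A quick check that the ConfigMap landed as expected:

    $ kubectl get configmap ceph-csi-config -o yaml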
    

    Recent versions of ceph-csi also require an additional ConfigMap object that defines Key Management Service (KMS) provider details. If KMS is not set up, put an empty configuration in a csi-kms-config-map.yaml file, or refer to the examples at https://github.com/ceph/ceph-csi/tree/master/examples/kms:

    $ cat <<EOF > csi-kms-config-map.yaml
    ---
    apiVersion: v1
    kind: ConfigMap
    data:
      config.json: |-
        {}
    metadata:
      name: ceph-csi-encryption-kms-config
    EOF
    

    Once generated, store the new ConfigMap object in Kubernetes:

    $ kubectl apply -f csi-kms-config-map.yaml
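
    Both ConfigMaps should now exist in the target namespace:

    $ kubectl get configmap ceph-csi-config ceph-csi-encryption-kms-config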
    

    GENERATE CEPH-CSI CEPHX SECRET

    ceph-csi requires cephx credentials to communicate with the Ceph cluster. Generate a csi-rbd-secret.yaml file similar to the example below, using the newly created Kubernetes user ID and cephx key:

    $ cat <<EOF > csi-rbd-secret.yaml
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: csi-rbd-secret
      namespace: default
    stringData:
      userID: kubernetes
      # Obtain the key with: ceph auth get-key client.kubernetes (no base64 encoding needed).
      userKey: AQD9o0Fd6hQRChAAt7fMaSZXduT3NWEqylNpmg==
    EOF
    

    Once generated, store the new Secret object in Kubernetes:

    $ kubectl apply -f csi-rbd-secret.yaml
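
    As an optional check, the stored key can be decoded back out of the Secret and compared with the cephx key recorded earlier:

    $ kubectl get secret csi-rbd-secret -o jsonpath='{.data.userKey}' | base64 -d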
    

    CONFIGURE CEPH-CSI PLUGINS

    If you cannot download the configuration files or pull the container images used below, older versions of the YAML files are included at the end of this article for reference.
    Create the required ServiceAccount and RBAC ClusterRole/ClusterRoleBinding Kubernetes objects. These objects do not necessarily need to be customized for your Kubernetes environment and can therefore be used as-is from the ceph-csi deployment YAMLs:

    $ kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
    $ kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml
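
    Among other objects, this creates the rbd-csi-provisioner and rbd-csi-nodeplugin service accounts (the same names used in the reference YAMLs later in this article), which you can verify with:

    $ kubectl get serviceaccount rbd-csi-provisioner rbd-csi-nodeplugin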
    

    Finally, create the ceph-csi provisioner and node plugins. With the possible exception of the ceph-csi container release version, these objects do not necessarily need to be customized for your Kubernetes environment and can therefore be used as-is from the ceph-csi deployment YAMLs:

    $ wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml
    $ kubectl apply -f csi-rbdplugin-provisioner.yaml
    $ wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin.yaml
    $ kubectl apply -f csi-rbdplugin.yaml
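
    Once applied, the provisioner Deployment and the node-plugin DaemonSet pods should come up. Per the labels in the deployment YAMLs shown later in this article, they can be watched with:

    $ kubectl get pods -l app=csi-rbdplugin-provisioner
    $ kubectl get pods -l app=csi-rbdplugin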
    

    By default, the provisioner and node plugin YAMLs pull the development release of the ceph-csi container (quay.io/cephcsi/cephcsi:canary). The YAMLs should be updated to use a release-version container for production workloads.

    USING CEPH BLOCK DEVICES

    CREATE A STORAGECLASS

    The Kubernetes StorageClass defines a class of storage. Multiple StorageClass objects can be created to map to different quality-of-service levels (e.g. NVMe vs. HDD-based pools) and features.

    For example, to create a ceph-csi StorageClass that maps to the kubernetes pool created above, the following YAML file can be used after ensuring that the "clusterID" property matches your Ceph cluster's fsid:

    $ cat <<EOF > csi-rbd-sc.yaml
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
       name: csi-rbd-sc
    provisioner: rbd.csi.ceph.com
    parameters:
       clusterID: 4d8fec26-e363-4753-b60f-49d69ab44cab
       pool: kubernetes
       imageFeatures: layering
       csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
       csi.storage.k8s.io/provisioner-secret-namespace: default
       csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
       csi.storage.k8s.io/controller-expand-secret-namespace: default
       csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
       csi.storage.k8s.io/node-stage-secret-namespace: default
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    mountOptions:
       - discard
    EOF
    $ kubectl apply -f csi-rbd-sc.yaml
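
    Verify that the StorageClass registered with the expected provisioner (rbd.csi.ceph.com):

    $ kubectl get storageclass csi-rbd-sc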
    

    CREATE A PERSISTENTVOLUMECLAIM

    A PersistentVolumeClaim is a request by a user for abstract storage resources. The PersistentVolumeClaim is then associated with a Pod resource to provision a PersistentVolume, which is backed by a Ceph block image. An optional volumeMode selects between a mounted file system (the default) and a raw-block-device-based volume.

    Using ceph-csi, specifying Filesystem for volumeMode supports ReadWriteOnce and ReadOnlyMany accessMode claims, while specifying Block for volumeMode supports ReadWriteOnce, ReadWriteMany, and ReadOnlyMany accessMode claims.

    For example, to create a block-based PersistentVolumeClaim that uses the ceph-csi-based StorageClass created above, the following YAML can be used to request raw block storage from the csi-rbd-sc StorageClass:

    $ cat <<EOF > raw-block-pvc.yaml
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: raw-block-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Block
      resources:
        requests:
          storage: 1Gi
      storageClassName: csi-rbd-sc
    EOF
    $ kubectl apply -f raw-block-pvc.yaml
    
    $ kubectl get pvc
    NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    raw-block-pvc   Bound    pvc-4933eb4f-34a0-4d06-be7c-485fbadafbc9   1Gi        RWO            csi-rbd-sc     6h35m
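
    To exercise the claim, a Pod can attach it as a raw block device through volumeDevices. A minimal sketch (the container image and device path below are illustrative choices, not prescribed by this setup):

    $ cat <<EOF > raw-block-pod.yaml
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-raw-block-volume
    spec:
      containers:
        - name: app
          # busybox is an arbitrary small image used here only to hold the device open
          image: busybox
          command: ["sh", "-c", "tail -f /dev/null"]
          volumeDevices:
            # the raw RBD image appears inside the container at this path
            - name: data
              devicePath: /dev/xvda
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: raw-block-pvc
    EOF
    $ kubectl apply -f raw-block-pod.yaml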
    

    If the PVC is stuck in the Pending state instead of Bound, inspect it with kubectl describe pvc raw-block-pvc and check the Ceph cluster's health with ceph -s.
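
    For completeness, a file-system-backed claim against the same StorageClass only needs volumeMode: Filesystem (the default). A minimal sketch (the claim name rbd-pvc is an arbitrary choice):

    $ cat <<EOF > fs-pvc.yaml
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: rbd-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      # Filesystem is the default volumeMode; shown explicitly for contrast
      volumeMode: Filesystem
      resources:
        requests:
          storage: 1Gi
      storageClassName: csi-rbd-sc
    EOF
    $ kubectl apply -f fs-pvc.yaml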

    CEPH-CSI PLUGIN CONFIGURATION FILES (OLDER VERSIONS, FOR REFERENCE)

    cat <<EOF > csi-provisioner-rbac.yaml 
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: rbd-csi-provisioner
    
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: rbd-external-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["secrets"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["list", "watch", "create", "update", "patch"]
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims/status"]
        verbs: ["update", "patch"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: ["snapshot.storage.k8s.io"]
        resources: ["volumesnapshots"]
        verbs: ["get", "list"]
      - apiGroups: ["snapshot.storage.k8s.io"]
        resources: ["volumesnapshotcontents"]
        verbs: ["create", "get", "list", "watch", "update", "delete"]
      - apiGroups: ["snapshot.storage.k8s.io"]
        resources: ["volumesnapshotclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["volumeattachments"]
        verbs: ["get", "list", "watch", "update", "patch"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["volumeattachments/status"]
        verbs: ["patch"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["csinodes"]
        verbs: ["get", "list", "watch"]
      - apiGroups: ["snapshot.storage.k8s.io"]
        resources: ["volumesnapshotcontents/status"]
        verbs: ["update"]
      - apiGroups: [""]
        resources: ["configmaps"]
        verbs: ["get"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: rbd-csi-provisioner-role
    subjects:
      - kind: ServiceAccount
        name: rbd-csi-provisioner
        namespace: default
    roleRef:
      kind: ClusterRole
      name: rbd-external-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      # replace with non-default namespace name
      namespace: default
      name: rbd-external-provisioner-cfg
    rules:
      - apiGroups: [""]
        resources: ["configmaps"]
        verbs: ["get", "list", "watch", "create", "update", "delete"]
      - apiGroups: ["coordination.k8s.io"]
        resources: ["leases"]
        verbs: ["get", "watch", "list", "delete", "update", "create"]
    
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: rbd-csi-provisioner-role-cfg
      # replace with non-default namespace name
      namespace: default
    subjects:
      - kind: ServiceAccount
        name: rbd-csi-provisioner
        # replace with non-default namespace name
        namespace: default
    roleRef:
      kind: Role
      name: rbd-external-provisioner-cfg
      apiGroup: rbac.authorization.k8s.io
    EOF
    
    cat <<EOF > csi-nodeplugin-rbac.yaml 
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: rbd-csi-nodeplugin
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: rbd-csi-nodeplugin
    rules:
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get"]
      # allow to read Vault Token and connection options from the Tenants namespace
      - apiGroups: [""]
        resources: ["secrets"]
        verbs: ["get"]
      - apiGroups: [""]
        resources: ["configmaps"]
        verbs: ["get"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: rbd-csi-nodeplugin
    subjects:
      - kind: ServiceAccount
        name: rbd-csi-nodeplugin
        namespace: default
    roleRef:
      kind: ClusterRole
      name: rbd-csi-nodeplugin
      apiGroup: rbac.authorization.k8s.io
    EOF
    
    cat <<EOF > csi-rbdplugin-provisioner.yaml 
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: csi-rbdplugin-provisioner
      labels:
        app: csi-metrics
    spec:
      selector:
        app: csi-rbdplugin-provisioner
      ports:
        - name: http-metrics
          port: 8080
          protocol: TCP
          targetPort: 8680
    
    ---
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: csi-rbdplugin-provisioner
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: csi-rbdplugin-provisioner
      template:
        metadata:
          labels:
            app: csi-rbdplugin-provisioner
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchExpressions:
                      - key: app
                        operator: In
                        values:
                          - csi-rbdplugin-provisioner
                  topologyKey: "kubernetes.io/hostname"
          serviceAccountName: rbd-csi-provisioner
          priorityClassName: system-cluster-critical
          containers:
            - name: csi-provisioner
              image: quay.io/k8scsi/csi-provisioner:v2.0.4
              args:
                - "--csi-address=$(ADDRESS)"
                - "--v=5"
                - "--timeout=150s"
                - "--retry-interval-start=500ms"
                - "--leader-election=true"
                #  set it to true to use topology based provisioning
                - "--feature-gates=Topology=false"
                # if fstype is not specified in storageclass, ext4 is default
                - "--default-fstype=ext4"
                - "--extra-create-metadata=true"
              env:
                - name: ADDRESS
                  value: unix:///csi/csi-provisioner.sock
              imagePullPolicy: "IfNotPresent"
              volumeMounts:
                - name: socket-dir
                  mountPath: /csi
            - name: csi-snapshotter
              # If you can pull the official upstream image, prefer it; otherwise the image below can be used as a fallback.
              image: quay.io/k8scsi/csi-snapshotter:v4.0.0
              args:
                - "--csi-address=$(ADDRESS)"
                - "--v=5"
                - "--timeout=150s"
                - "--leader-election=true"
              env:
                - name: ADDRESS
                  value: unix:///csi/csi-provisioner.sock
              imagePullPolicy: "IfNotPresent"
              securityContext:
                privileged: true
              volumeMounts:
                - name: socket-dir
                  mountPath: /csi
            - name: csi-attacher
              image: quay.io/k8scsi/csi-attacher:v3.0.2
              args:
                - "--v=5"
                - "--csi-address=$(ADDRESS)"
                - "--leader-election=true"
                - "--retry-interval-start=500ms"
              env:
                - name: ADDRESS
                  value: /csi/csi-provisioner.sock
              imagePullPolicy: "IfNotPresent"
              volumeMounts:
                - name: socket-dir
                  mountPath: /csi
            - name: csi-resizer
              image: quay.io/k8scsi/csi-resizer:v1.0.1
              args:
                - "--csi-address=$(ADDRESS)"
                - "--v=5"
                - "--timeout=150s"
                - "--leader-election"
                - "--retry-interval-start=500ms"
                - "--handle-volume-inuse-error=false"
              env:
                - name: ADDRESS
                  value: unix:///csi/csi-provisioner.sock
              imagePullPolicy: "IfNotPresent"
              volumeMounts:
                - name: socket-dir
                  mountPath: /csi
            - name: csi-rbdplugin
              securityContext:
                privileged: true
                capabilities:
                  add: ["SYS_ADMIN"]
              # for stable functionality replace canary with latest release version
              image: quay.io/cephcsi/cephcsi:canary
              args:
                - "--nodeid=$(NODE_ID)"
                - "--type=rbd"
                - "--controllerserver=true"
                - "--endpoint=$(CSI_ENDPOINT)"
                - "--v=5"
                - "--drivername=rbd.csi.ceph.com"
                - "--pidlimit=-1"
                - "--rbdhardmaxclonedepth=8"
                - "--rbdsoftmaxclonedepth=4"
                - "--enableprofiling=false"
              env:
                - name: POD_IP
                  valueFrom:
                    fieldRef:
                      fieldPath: status.podIP
                - name: NODE_ID
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
                # - name: POD_NAMESPACE
                #   valueFrom:
                #     fieldRef:
                #       fieldPath: spec.namespace
                - name: KMS_CONFIGMAP_NAME
                  value: encryptionConfig
                - name: CSI_ENDPOINT
                  value: unix:///csi/csi-provisioner.sock
              imagePullPolicy: "IfNotPresent"
              volumeMounts:
                - name: socket-dir
                  mountPath: /csi
                - mountPath: /dev
                  name: host-dev
                - mountPath: /sys
                  name: host-sys
                - mountPath: /lib/modules
                  name: lib-modules
                  readOnly: true
                - name: ceph-csi-config
                  mountPath: /etc/ceph-csi-config/
                - name: ceph-csi-encryption-kms-config
                  mountPath: /etc/ceph-csi-encryption-kms-config/
                - name: keys-tmp-dir
                  mountPath: /tmp/csi/keys
            - name: csi-rbdplugin-controller
              securityContext:
                privileged: true
                capabilities:
                  add: ["SYS_ADMIN"]
              # for stable functionality replace canary with latest release version
              image: quay.io/cephcsi/cephcsi:canary
              args:
                - "--type=controller"
                - "--v=5"
                - "--drivername=rbd.csi.ceph.com"
                - "--drivernamespace=$(DRIVER_NAMESPACE)"
              env:
                - name: DRIVER_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
              imagePullPolicy: "IfNotPresent"
              volumeMounts:
                - name: ceph-csi-config
                  mountPath: /etc/ceph-csi-config/
                - name: keys-tmp-dir
                  mountPath: /tmp/csi/keys
            - name: liveness-prometheus
              image: quay.io/cephcsi/cephcsi:canary
              args:
                - "--type=liveness"
                - "--endpoint=$(CSI_ENDPOINT)"
                - "--metricsport=8680"
                - "--metricspath=/metrics"
                - "--polltime=60s"
                - "--timeout=3s"
              env:
                - name: CSI_ENDPOINT
                  value: unix:///csi/csi-provisioner.sock
                - name: POD_IP
                  valueFrom:
                    fieldRef:
                      fieldPath: status.podIP
              volumeMounts:
                - name: socket-dir
                  mountPath: /csi
              imagePullPolicy: "IfNotPresent"
          volumes:
            - name: host-dev
              hostPath:
                path: /dev
            - name: host-sys
              hostPath:
                path: /sys
            - name: lib-modules
              hostPath:
                path: /lib/modules
            - name: socket-dir
              emptyDir: {
                medium: "Memory"
              }
            - name: ceph-csi-config
              configMap:
                name: ceph-csi-config
            - name: ceph-csi-encryption-kms-config
              configMap:
                name: ceph-csi-encryption-kms-config
            - name: keys-tmp-dir
              emptyDir: {
                medium: "Memory"
              }
    EOF
    
    cat <<EOF > csi-rbdplugin.yaml 
    ---
    kind: DaemonSet
    apiVersion: apps/v1
    metadata:
      name: csi-rbdplugin
    spec:
      selector:
        matchLabels:
          app: csi-rbdplugin
      template:
        metadata:
          labels:
            app: csi-rbdplugin
        spec:
          serviceAccountName: rbd-csi-nodeplugin
          hostNetwork: true
          hostPID: true
          priorityClassName: system-node-critical
          # to use e.g. Rook orchestrated cluster, and mons' FQDN is
          # resolved through k8s service, set dns policy to cluster first
          dnsPolicy: ClusterFirstWithHostNet
          containers:
            - name: driver-registrar
              # This is necessary only for systems with SELinux, where
              # non-privileged sidecar containers cannot access unix domain socket
              # created by privileged CSI driver container.
              securityContext:
                privileged: true
              image: quay.io/k8scsi/csi-node-driver-registrar:v2.0.1
              args:
                - "--v=5"
                - "--csi-address=/csi/csi.sock"
                - "--kubelet-registration-path=/data/k8s/data/kubelet/plugins/rbd.csi.ceph.com/csi.sock"
              env:
                - name: KUBE_NODE_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
              volumeMounts:
                - name: socket-dir
                  mountPath: /csi
                - name: registration-dir
                  mountPath: /registration
            - name: csi-rbdplugin
              securityContext:
                privileged: true
                capabilities:
                  add: ["SYS_ADMIN"]
                allowPrivilegeEscalation: true
              # for stable functionality replace canary with latest release version
              image: quay.io/cephcsi/cephcsi:canary
              args:
                - "--nodeid=$(NODE_ID)"
                - "--type=rbd"
                - "--nodeserver=true"
                - "--endpoint=$(CSI_ENDPOINT)"
                - "--v=5"
                - "--drivername=rbd.csi.ceph.com"
                - "--enableprofiling=false"
                # If topology based provisioning is desired, configure required
                # node labels representing the nodes topology domain
                # and pass the label names below, for CSI to consume and advertise
                # its equivalent topology domain
                # - "--domainlabels=failure-domain/region,failure-domain/zone"
              env:
                - name: POD_IP
                  valueFrom:
                    fieldRef:
                      fieldPath: status.podIP
                - name: NODE_ID
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
                # - name: POD_NAMESPACE
                #   valueFrom:
                #     fieldRef:
                #       fieldPath: spec.namespace
                - name: KMS_CONFIGMAP_NAME
                  value: encryptionConfig
                - name: CSI_ENDPOINT
                  value: unix:///csi/csi.sock
              imagePullPolicy: "IfNotPresent"
              volumeMounts:
                - name: socket-dir
                  mountPath: /csi
                - mountPath: /dev
                  name: host-dev
                - mountPath: /sys
                  name: host-sys
                - mountPath: /run/mount
                  name: host-mount
                - mountPath: /lib/modules
                  name: lib-modules
                  readOnly: true
                - name: ceph-csi-config
                  mountPath: /etc/ceph-csi-config/
                - name: ceph-csi-encryption-kms-config
                  mountPath: /etc/ceph-csi-encryption-kms-config/
                - name: plugin-dir
                  mountPath: /data/k8s/data/kubelet/plugins
                  mountPropagation: "Bidirectional"
                - name: mountpoint-dir
                  mountPath: /data/k8s/data/kubelet/pods
                  mountPropagation: "Bidirectional"
                - name: keys-tmp-dir
                  mountPath: /tmp/csi/keys
            - name: liveness-prometheus
              securityContext:
                privileged: true
              image: quay.io/cephcsi/cephcsi:canary
              args:
                - "--type=liveness"
                - "--endpoint=$(CSI_ENDPOINT)"
                - "--metricsport=8680"
                - "--metricspath=/metrics"
                - "--polltime=60s"
                - "--timeout=3s"
              env:
                - name: CSI_ENDPOINT
                  value: unix:///csi/csi.sock
                - name: POD_IP
                  valueFrom:
                    fieldRef:
                      fieldPath: status.podIP
              volumeMounts:
                - name: socket-dir
                  mountPath: /csi
              imagePullPolicy: "IfNotPresent"
          volumes:
            - name: socket-dir
              hostPath:
                path: /data/k8s/data/kubelet/plugins/rbd.csi.ceph.com
                type: DirectoryOrCreate
            - name: plugin-dir
              hostPath:
                path: /data/k8s/data/kubelet/plugins
                type: Directory
            - name: mountpoint-dir
              hostPath:
                path: /data/k8s/data/kubelet/pods
                type: DirectoryOrCreate
            - name: registration-dir
              hostPath:
                # kubelet data directory (default in this cluster: /data/k8s/data/kubelet/plugins_registry/)
                path: /data/k8s/data/kubelet/plugins_registry/
                type: Directory
            - name: host-dev
              hostPath:
                path: /dev
            - name: host-sys
              hostPath:
                path: /sys
            - name: host-mount
              hostPath:
                path: /run/mount
            - name: lib-modules
              hostPath:
                path: /lib/modules
            - name: ceph-csi-config
              configMap:
                name: ceph-csi-config
            - name: ceph-csi-encryption-kms-config
              configMap:
                name: ceph-csi-encryption-kms-config
            - name: keys-tmp-dir
              emptyDir: {
                medium: "Memory"
              }
    ---
    # This is a service to expose the liveness metrics
    apiVersion: v1
    kind: Service
    metadata:
      name: csi-metrics-rbdplugin
      labels:
        app: csi-metrics
    spec:
      ports:
        - name: http-metrics
          port: 8080
          protocol: TCP
          targetPort: 8680
      selector:
        app: csi-rbdplugin
    EOF
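
    Note that these older YAMLs hard-code a non-default kubelet data directory (/data/k8s/data/kubelet). If your kubelet uses the standard /var/lib/kubelet, rewrite those paths in both files before applying, for example:

    $ sed -i 's#/data/k8s/data/kubelet#/var/lib/kubelet#g' csi-rbdplugin.yaml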
    