• Creating a Dynamic StorageClass on Kubernetes Backed by NFS


    Introduction

    nfs-subdir-external-provisioner is an automatic provisioner that uses an existing NFS server to dynamically provision persistent volumes through PersistentVolumeClaims. On the NFS server, each persistent volume is provisioned as a subdirectory named ${namespace}-${pvcName}-${pvName}.

    NFS-Subdir-External-Provisioner is an extension of nfs-client-provisioner. GitHub: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

    Deploying NFS

    All nodes must have nfs-utils installed.

    # The detailed setup process is omitted; only the NFS export configuration is shown here
    /xxxx/data/nfs1/       *(rw,sync,no_root_squash)
    
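The export above assumes the NFS server is already running. As a rough sketch (RHEL/CentOS-family package and service names; adjust for apt-based distributions), the server side can be prepared like this:

```shell
# On the NFS server (the directory matches the export shown above)
yum install -y nfs-utils
mkdir -p /xxxx/data/nfs1
echo '/xxxx/data/nfs1/ *(rw,sync,no_root_squash)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -r                # reload the export table
showmount -e localhost     # verify the share is exported
```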

    Configuring the StorageClass

    • Create the RBAC objects
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
      namespace: default
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "update", "patch"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        namespace: default
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      namespace: default
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      namespace: default
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        namespace: default
    roleRef:
      kind: Role
      name: leader-locking-nfs-client-provisioner
      apiGroup: rbac.authorization.k8s.io
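Assuming the manifests above are saved as rbac.yaml (a filename chosen here for illustration), they can be applied and spot-checked with:

```shell
kubectl apply -f rbac.yaml
# Confirm the objects exist
kubectl get serviceaccount nfs-client-provisioner -n default
kubectl get clusterrole nfs-client-provisioner-runner
kubectl get rolebinding leader-locking-nfs-client-provisioner -n default
```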
    
    
    • Deploy NFS-Subdir-External-Provisioner
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
      labels:
        app: nfs-client-provisioner
      namespace: default
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: quay.io/external_storage/nfs-client-provisioner:latest
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: nfs-client # provisioner name; the StorageClass created later must match this
                - name: NFS_SERVER
                  value: 10.10.10.21 # address of the NFS server
                - name: NFS_PATH
                  value: /epailive/data/nfs1 # NFS export path
          volumes:
            - name: nfs-client-root
              nfs:
                server: 10.10.10.21
                path: /epailive/data/nfs1 # NFS export path
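Saved as nfs-deployment.yaml (a hypothetical filename), the Deployment can be applied and checked like this:

```shell
kubectl apply -f nfs-deployment.yaml
# Wait for the provisioner pod to become Ready
kubectl rollout status deployment/nfs-client-provisioner -n default
kubectl get pods -n default -l app=nfs-client-provisioner
# Inspect its logs if anything looks wrong
kubectl logs -n default -l app=nfs-client-provisioner --tail=20
```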
    
    • Create the StorageClass
    #cat storageclass.yaml 
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: nfs-storage
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"  # make this the default StorageClass
    provisioner: nfs-client  # dynamic provisioner name; must match the PROVISIONER_NAME value set above
    parameters:
      archiveOnDelete: "true"  # "false": data is deleted when the PVC is deleted; "true": data is retained (archived)
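Apply the StorageClass and verify that the default-class annotation took effect:

```shell
kubectl apply -f storageclass.yaml
# "(default)" shown next to the name confirms the is-default-class annotation worked
kubectl get storageclass
```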
    
    • Create a PVC to test
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: test-claim
    spec:
      storageClassName: nfs-storage # must match the name of the StorageClass created above
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi # adjust the requested size to your needs
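Saved as test-pvc.yaml (a hypothetical filename), the claim can be applied and watched until it binds:

```shell
kubectl apply -f test-pvc.yaml
# With the provisioner healthy, the claim should go from Pending to Bound
kubectl get pvc
kubectl get pv   # a PV named pvc-<uuid>, created by nfs-client, should appear
```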
    
    

    Check the status of the PV and PVC


    Check the data created automatically on the NFS share

    Enter the NFS shared directory and you can see that the volume's directory has already been created. The volume directory name is a combination of the namespace, the PVC name, and a UUID.
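The naming scheme can be illustrated with plain shell string interpolation (the PV name below is a made-up example; real PV names carry a random UUID):

```shell
# Directory the provisioner creates on the share: ${namespace}-${pvcName}-${pvName}
namespace="default"
pvc_name="test-claim"
pv_name="pvc-8a9f5c3e-1234-4321-aaaa-000000000000"
dir="${namespace}-${pvc_name}-${pv_name}"
echo "$dir"   # prints default-test-claim-pvc-8a9f5c3e-1234-4321-aaaa-000000000000
```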


    Test pod

    cat > test-pod.yaml <<EOF
    kind: Pod
    apiVersion: v1
    metadata:
      name: test-pod
    spec:
      containers:
      - name: test-pod
        image: busybox:latest
        command:
          - "/bin/sh"
        args:
          - "-c"
          - "touch /mnt/index.html && echo first>>/mnt/index.html && exit 0 || exit 1"
        volumeMounts:
          - name: nfs-pvc
            mountPath: "/mnt"
      restartPolicy: "Never"
      volumes:
        - name: nfs-pvc
          persistentVolumeClaim:
            claimName: test-claim
    
    EOF
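Apply the pod and confirm the write reached the NFS share (the share path below is the one the provisioner mounts in the Deployment above):

```shell
kubectl apply -f test-pod.yaml
# The pod runs once and exits; Completed status means the write succeeded
kubectl get pod test-pod
# On the NFS server, index.html should now exist inside the PVC's subdirectory
ls /epailive/data/nfs1/default-test-claim-*/
cat /epailive/data/nfs1/default-test-claim-*/index.html
```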
    
    

    Check the file the pod created on the NFS share



    [Note!!!]

    On Kubernetes v1.20 and later, using NFS as a StorageClass can fail with: selfLink was empty, can't make reference

    When using NFS to create a StorageClass for dynamic provisioning,
    the rbac, nfs-deployment, and nfs-storageclass objects were all created and ran normally,
    but the PVC then stayed stuck in Pending status.
    kubectl describe pvc test-claim shows the following:


      Normal  ExternalProvisioning  13s (x2 over 25s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "nfs-client" or manually created by system administrator
    

    Then the provisioner's logs (kubectl logs nfs-client-provisioner-6df55f9474-fdnpc) show the following:


    E1022 07:01:24.615869       1 controller.go:1004] provision "default/test-claim" class "nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
    

    The selfLink field existed in clusters before v1.20 and was removed in v1.20 and later. As a workaround, edit /etc/kubernetes/manifests/kube-apiserver.yaml and
    add - --feature-gates=RemoveSelfLink=false
    (Note: this feature gate was removed entirely in v1.24, so on newer clusters the proper fix is to use a recent nfs-subdir-external-provisioner image that no longer depends on selfLink.)

    spec:
      containers:
      - command:
        - kube-apiserver
        - --feature-gates=RemoveSelfLink=false
    

    After adding the flag, a cluster deployed with kubeadm will automatically reload and redeploy the pod.

    The apiserver installed by kubeadm runs as a Static Pod, so changes to its manifest take effect immediately.
    The kubelet watches that file: after you modify /etc/kubernetes/manifests/kube-apiserver.yaml, it automatically terminates the existing kube-apiserver-{nodename} Pod and creates a replacement Pod that uses the new flags.
    If you have multiple Kubernetes master nodes, you must edit this file on every master node and keep the flags consistent across all of them.
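After editing the manifest, a quick way to confirm the change is active (a sketch; the apiserver pod name carries your node name as a suffix):

```shell
# The kubelet recreates the static pod automatically; watch it come back
kubectl get pods -n kube-system | grep kube-apiserver
# Confirm the flag is present in the running pod's command line
kubectl get pod -n kube-system -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}' | grep RemoveSelfLink
```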
    

    Note: if the apiserver fails to start, re-run:

    kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
    

    The PVC directory is now visible on the NFS server.

    This problem is described in detail on GitHub:
    https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/issues/25


  • Original article: https://www.cnblogs.com/xull0651/p/15457326.html