• Kubernetes persistent storage: PV, PVC, and StorageClass


    Kubernetes persistent storage

    1. The old way of persisting data

      Mount data volumes via volumes (here, a hostPath volume).

    1. web3.yaml content:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      creationTimestamp: null
      labels:
        app: web3
      name: web3
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: web3
      strategy: {}
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: web3
        spec:
          containers:
          - image: nginx
            name: nginx
            resources: {}
            volumeMounts:
            - name: varlog
              mountPath: /tmp/log
          volumes:
          - name: varlog
            hostPath:
              path: /tmp/log/web3log
    status: {}

    2. Create the resources and inspect them

    [root@k8smaster1 volumestest]# kubectl get pods | grep web3
    web3-6c6557674d-xt7kr                           1/1     Running   0          6m38s
    [root@k8smaster1 volumestest]# kubectl describe pods web3-6c6557674d-xt7kr

    The relevant mount information is as follows:

    3. Create a file inside the container

    [root@k8smaster1 volumestest]# kubectl exec -it web3-6c6557674d-xt7kr
    error: you must specify at least one command for the container
    [root@k8smaster1 volumestest]# kubectl exec -it web3-6c6557674d-xt7kr -- bash
    root@web3-6c6557674d-xt7kr:/# echo "123" > /tmp/log/test.txt
    root@web3-6c6557674d-xt7kr:/# exit
    exit

    4. On the node where the pod is scheduled, check whether the host directory was mounted

    (1) On the master node, find which node the pod was scheduled to

    [root@k8smaster1 volumestest]# kubectl get pods -o wide | grep web3
    web3-6c6557674d-xt7kr                           1/1     Running   0          11m     10.244.2.108     k8snode2     <none>           <none>

    (2) Check on the k8snode2 node

    [root@k8snode2 web3log]# ll
    total 4
    -rw-r--r-- 1 root root 4 Jan 21 05:49 test.txt
    [root@k8snode2 web3log]# cat test.txt 
    123

    5. Simulate a k8snode2 outage: the pod is automatically rescheduled to k8snode1; check again

    [root@k8smaster1 volumestest]# kubectl get pods -o wide | grep web3
    web3-6c6557674d-6wlh4                           1/1     Running       0          4m22s   10.244.1.110     k8snode1     <none>           <none>
    web3-6c6557674d-xt7kr                           1/1     Terminating   0          22m     10.244.2.108     k8snode2     <none>           <none>
    [root@k8smaster1 volumestest]# kubectl exec -it web3-6c6557674d-6wlh4 -- bash
    root@web3-6c6557674d-6wlh4:/# ls /tmp/log/
    root@web3-6c6557674d-6wlh4:/# 

       The pod was automatically rescheduled to k8snode1, and inside the container the previously created file is gone.

    6. The k8snode1 host has no file either

    [root@k8snode1 web3log]# pwd
    /tmp/log/web3log
    [root@k8snode1 web3log]# ls
    [root@k8snode1 web3log]# 

      The result: when the node a pod runs on goes down, the files in its hostPath volume are lost with it, so a better solution is needed.

    1. Persistent storage with NFS

      NFS (Network File System) is a shared file system: clients effectively upload files to a server, which shares them out to all clients.

    1. Install NFS

    1. Pick a server and install NFS on it

    (1) Install NFS and check the service status

    yum install -y nfs-utils

    (2) Configure the export path; note that the directory itself must be created first

    [root@k8smaster2 logs]# cat /etc/exports
    /data/nfs *(rw,no_root_squash)

    Explanation: rw grants read-write access; no_root_squash lets the client's root user keep full root privileges over the exported directory. After editing /etc/exports, run exportfs -r to reload the export table.

    2. Install nfs-utils on the k8s cluster nodes

    yum install -y nfs-utils

    3. On the NFS server, start the NFS service and check its status

    [root@k8smaster2 nfs]# systemctl start nfs    # start nfs
    [root@k8smaster2 nfs]# systemctl status nfs    # check its status
    ● nfs-server.service - NFS server and services
       Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
       Active: active (exited) since Fri 2022-01-21 19:55:38 EST; 5min ago
      Process: 51947 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
      Process: 51943 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
      Process: 51941 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
      Process: 51977 ExecStartPost=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl reload gssproxy ; fi (code=exited, status=0/SUCCESS)
      Process: 51960 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
      Process: 51958 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
     Main PID: 51960 (code=exited, status=0/SUCCESS)
        Tasks: 0
       Memory: 0B
       CGroup: /system.slice/nfs-server.service
    
    Jan 21 19:55:38 k8smaster2 systemd[1]: Starting NFS server and services...
    Jan 21 19:55:38 k8smaster2 systemd[1]: Started NFS server and services.
    [root@k8smaster2 nfs]# showmount -e localhost    # list the exported NFS directories
    Export list for localhost:
    /data/nfs *

    You can also inspect the NFS processes:

    [root@k8smaster2 nfs]# ps -ef | grep nfs
    root      51962      2  0 19:55 ?        00:00:00 [nfsd4_callbacks]
    root      51968      2  0 19:55 ?        00:00:00 [nfsd]
    root      51969      2  0 19:55 ?        00:00:00 [nfsd]
    root      51970      2  0 19:55 ?        00:00:00 [nfsd]
    root      51971      2  0 19:55 ?        00:00:00 [nfsd]
    root      51972      2  0 19:55 ?        00:00:00 [nfsd]
    root      51973      2  0 19:55 ?        00:00:00 [nfsd]
    root      51974      2  0 19:55 ?        00:00:00 [nfsd]
    root      51975      2  0 19:55 ?        00:00:00 [nfsd]
    root      54774  45013  0 20:02 pts/2    00:00:00 grep --color=auto nfs

    2. Client installation

     1. Install the client on every k8s node

    yum install -y nfs-utils

    2. View the remote export information

    [root@k8snode1 ~]# showmount -e 192.168.13.106
    Export list for 192.168.13.106:
    /data/nfs *

    3. Test NFS locally

    (1) Create a mount point and mount the export

    [root@k8snode1 ~]# mkdir /share
    [root@k8snode1 ~]# mount 192.168.13.106:/data/nfs /share
    [root@k8snode1 ~]# df -h | grep 13.106
    192.168.13.106:/data/nfs   17G   12G  5.4G  69% /share

    (2) Create a file from the node

    [root@k8snode1 ~]# echo "hello from 104" >> /share/104.txt
    [root@k8snode1 ~]# cat /share/104.txt 
    hello from 104

    (3) Check on the NFS server

    [root@k8smaster2 nfs]# cat 104.txt 
    hello from 104

    (4) Unmount on the client

    [root@k8snode1 ~]# umount /share
    [root@k8snode1 ~]# df -h | grep 13.106

    After unmounting, the file still exists on the NFS server.

    3. Using NFS from the k8s cluster

    1. Write nfs-nginx.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-dep1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            volumeMounts:
            - name: wwwroot
              mountPath: /usr/share/nginx/html
            ports:
            - containerPort: 80
          volumes:
            - name: wwwroot
              nfs:
                server: 192.168.13.106
                path: /data/nfs

    2. Create the resources

    [root@k8smaster1 nfs]# kubectl apply -f nfs-nginx.yaml 
    deployment.apps/nginx-dep1 created

    Then check the pod's describe output.

    3. Enter the container and create a file under /usr/share/nginx/html

    root@nginx-dep1-6d7f9c85dc-lqfbf:/# cat /usr/share/nginx/html/index.html 
    hello

    4. Then check on the NFS server

    [root@k8smaster2 nfs]# pwd
    /data/nfs
    [root@k8smaster2 nfs]# ls
    104.txt  index.html
    [root@k8smaster2 nfs]# cat index.html 
    hello

    4. PV and PVC

    Using NFS directly has a drawback: every workload that needs persistence must know the remote NFS server's address and access details, which is inconvenient and not very secure. PV and PVC address this.

    PV and PVC stand for PersistentVolume and PersistentVolumeClaim. A PV declares the storage details (such as the NFS address) as a configuration object; a PVC references what the PV declares, and through it the pod gets NFS-backed persistent storage without knowing the server details.

    PVs can be backed by many storage types; here a PV is essentially a thin layer over NFS. Since NFS is already installed, we build on it.

    Reference: https://kubernetes.io/docs/concepts/storage/persistent-volumes/

    1. Create a PV

    1. Create pv.yaml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: my-pv
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteMany
      nfs:
        path: /data/nfs
        server: 192.168.13.106

    2. Create it and inspect

    [root@k8smaster1 nfs]# kubectl apply -f pv.yaml 
    persistentvolume/my-pv created
    [root@k8smaster1 nfs]# kubectl get pv -o wide
    NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE    VOLUMEMODE
    my-pv   5Gi        RWX            Retain           Available                                   2m4s   Filesystem

    Supplement: some core PV concepts

    1. test-pv.yml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name:  pv2
    spec:
      capacity: 
        storage: 1Gi
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Recycle
      nfs:
        path: /data/nfs
        server: 192.168.13.106

    2. Create it and inspect

    [root@k8smaster1 storageclass]# kubectl apply -f test-pv.yml 
    persistentvolume/pv2 created
    [root@k8smaster1 storageclass]# kubectl get pv
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM              STORAGECLASS         REASON   AGE
    pv2                                        1Gi        RWO            Recycle          Available                                                    5s
    pvc-97bce597-0788-49a1-be6d-5a938363797b   1Mi        RWX            Delete           Bound       default/test-pvc   course-nfs-storage            27m
    [root@k8smaster1 storageclass]# kubectl describe pv pv2
    Name:            pv2
    Labels:          <none>
    Annotations:
    Finalizers:      [kubernetes.io/pv-protection]
    StorageClass:    
    Status:          Available
    Claim:           
    Reclaim Policy:  Recycle
    Access Modes:    RWO
    VolumeMode:      Filesystem
    Capacity:        1Gi
    Node Affinity:   <none>
    Message:         
    Source:
        Type:      NFS (an NFS mount that lasts the lifetime of a pod)
        Server:    192.168.13.106
        Path:      /data/nfs
        ReadOnly:  false
    Events:        <none>

    3. Core concepts

    (1) Capacity (storage capacity)

      Generally a PV must specify its storage capacity via the capacity attribute. Currently only the storage size can be set (storage: 1Gi here); metrics such as IOPS and throughput may be added in the future.
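    As a side note, the binary suffixes used in capacity fields (Ki, Mi, Gi, Ti) can be converted to bytes with a small helper. This is a sketch of our own, not Kubernetes code; real quantity parsing also accepts decimal suffixes such as k, M, and G:

```python
# Convert Kubernetes binary quantity strings (as used in PV capacity)
# to bytes. Sketch only: real k8s quantities also allow decimal
# suffixes (k, M, G) and plain integers.
BINARY_SUFFIXES = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def quantity_to_bytes(q: str) -> int:
    for suffix, factor in BINARY_SUFFIXES.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # no suffix: the value is already in bytes

print(quantity_to_bytes("1Gi"))  # 1073741824
print(quantity_to_bytes("5Gi") >= quantity_to_bytes("1Mi"))  # True
```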

    (2) AccessModes (access modes)

    AccessModes describe the access rights a user application has to the storage resource. The options are:

    ReadWriteOnce (RWO): read-write, but mountable by only a single node

    ReadOnlyMany (ROX): read-only, mountable by many nodes

    ReadWriteMany (RWX): read-write, mountable by many nodes

    (3) persistentVolumeReclaimPolicy (reclaim policy)

    The PV here uses the Recycle policy; PVs currently support three policies:

    Retain - keep the data; an administrator must clean it up manually

    Recycle - scrub the PV's data, equivalent to running rm -rf /thevolume/*

    Delete - the backing storage deletes the volume; this is typical of cloud-provider storage such as AWS EBS

    (4) Status

    Over its lifecycle a PV can be in one of four phases:

    Available: the PV is available and not yet bound to any PVC

    Bound: the PV has been bound to a PVC

    Released: the PVC was deleted, but the resource has not yet been reclaimed by the cluster

    Failed: automatic reclamation of the PV failed
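    Roughly speaking, a PVC can bind only to an Available PV whose capacity is at least the requested size and whose access modes cover the requested ones. The sketch below is our own simplification (the real controller also matches StorageClass, label selectors, and volume mode, and prefers the smallest PV that fits):

```python
# Simplified PV/PVC matching check (our own sketch, not the real binder).
def pv_satisfies(pv: dict, pvc: dict) -> bool:
    return (
        pv["phase"] == "Available"                                # must be unbound
        and pv["capacity"] >= pvc["request"]                      # enough space
        and set(pvc["access_modes"]) <= set(pv["access_modes"])   # modes covered
    )

my_pv = {"phase": "Available", "capacity": 5 * 2**30, "access_modes": ["RWX"]}
print(pv_satisfies(my_pv, {"request": 5 * 2**30, "access_modes": ["RWX"]}))  # True
print(pv_satisfies(my_pv, {"request": 6 * 2**30, "access_modes": ["RWX"]}))  # False
```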

    2. Create a PVC that uses the PV above

    1. Create pvc.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-dep1
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            volumeMounts:
            - name: wwwroot
              mountPath: /usr/share/nginx/html
            ports:
            - containerPort: 80
          volumes:
          - name: wwwroot
            persistentVolumeClaim:
              claimName: my-pvc
    
    ---
    
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi

    2. Create and inspect

    [root@k8smaster1 nfs]# kubectl apply -f pvc.yaml 
    deployment.apps/nginx-dep1 created
    persistentvolumeclaim/my-pvc created
    [root@k8smaster1 nfs]# kubectl get pvc -o wide
    NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
    my-pvc   Bound    my-pv    5Gi        RWX                           60s   Filesystem
    [root@k8smaster1 nfs]# kubectl get pods -o wide
    NAME                                            READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
    nginx-dep1-58b7bf955f-4jhbq                     1/1     Running   0          75s     10.244.2.112     k8snode2     <none>           <none>
    nginx-dep1-58b7bf955f-m69dm                     1/1     Running   0          75s     10.244.2.110     k8snode2     <none>           <none>
    nginx-dep1-58b7bf955f-qh6pg                     1/1     Running   0          75s     10.244.2.111     k8snode2     <none>           <none>
    nginx-f89759699-vkf7d                           1/1     Running   3          4d16h   10.244.1.106     k8snode1     <none>           <none>
    tomcat-58767d5b5-f5qwj                          1/1     Running   2          4d15h   10.244.1.103     k8snode1     <none>           <none>
    weave-scope-agent-ui-kbq7b                      1/1     Running   2          45h     192.168.13.105   k8snode2     <none>           <none>
    weave-scope-agent-ui-tg5q4                      1/1     Running   2          45h     192.168.13.103   k8smaster1   <none>           <none>
    weave-scope-agent-ui-xwh2b                      1/1     Running   2          45h     192.168.13.104   k8snode1     <none>           <none>
    weave-scope-cluster-agent-ui-7498b8d4f4-zdlk7   1/1     Running   2          45h     10.244.1.104     k8snode1     <none>           <none>
    weave-scope-frontend-ui-649c7dcd5d-7gb9s        1/1     Running   2          45h     10.244.1.107     k8snode1     <none>           <none>
    web3-6c6557674d-6wlh4                           1/1     Running   0          14h     10.244.1.110     k8snode1     <none>           <none>
    [root@k8smaster1 nfs]# 

    3. Enter any one of the pods and create a file

    [root@k8smaster1 nfs]# kubectl exec -it nginx-dep1-58b7bf955f-4jhbq -- bash
    root@nginx-dep1-58b7bf955f-4jhbq:/# echo "111222" >> /usr/share/nginx/html/1.txt
    root@nginx-dep1-58b7bf955f-4jhbq:/# exit
    exit

    4. Check on the NFS server and from the other pods

    (1) On the NFS server

    [root@k8smaster2 nfs]# ls
    104.txt  1.txt  index.html
    [root@k8smaster2 nfs]# cat 1.txt 
    111222

    (2) From a container in another pod

    [root@k8smaster1 nfs]# kubectl exec -it nginx-dep1-58b7bf955f-qh6pg -- bash
    root@nginx-dep1-58b7bf955f-qh6pg:/# ls /usr/share/nginx/html/
    1.txt  104.txt  index.html

      This completes a simple persistent-storage setup based on NFS with PV and PVC.

    5. StorageClass

      A PV is static: to use a PVC you must first create a matching PV by hand, which often cannot meet real needs. For example, one application may demand high storage concurrency while another demands high read/write throughput, and for StatefulSet workloads in particular static PVs are a poor fit. In those cases we need dynamically provisioned PVs, which is what StorageClass provides.

    1. Create a StorageClass

      To use a StorageClass we must install a matching auto-provisioner. Since our storage backend is NFS, we need the nfs-client provisioner: it uses the NFS server we already configured to create persistent volumes, i.e. PVs, automatically.

      Automatically created PVs are stored under the shared data directory on the NFS server in directories named ${namespace}-${pvcName}-${pvName}; when such a PV is reclaimed, the directory is kept under the name archived-${namespace}-${pvcName}-${pvName}.
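    The two directory-name formats can be reproduced with a trivial helper (the function names here are ours, for illustration only):

```python
# Build the directory names the nfs-client provisioner uses on the
# NFS share: ${namespace}-${pvcName}-${pvName}, and the archived form
# used after the PV is reclaimed.
def provisioned_dir(namespace: str, pvc_name: str, pv_name: str) -> str:
    return f"{namespace}-{pvc_name}-{pv_name}"

def archived_dir(namespace: str, pvc_name: str, pv_name: str) -> str:
    return "archived-" + provisioned_dir(namespace, pvc_name, pv_name)

print(provisioned_dir("default", "test-pvc", "pvc-97bce597-0788-49a1-be6d-5a938363797b"))
# default-test-pvc-pvc-97bce597-0788-49a1-be6d-5a938363797b
```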

      Before deploying nfs-client, the NFS server must already be running (address 192.168.13.106, shared directory /data/nfs/). Then deploy nfs-client as below, or follow its documentation: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client

    Step 1: configure the Deployment, replacing the parameters with our own NFS settings (nfs-client.yml):

    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: nfs-client-provisioner
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: quay.io/external_storage/nfs-client-provisioner:latest
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: fuseim.pri/ifs
                - name: NFS_SERVER
                  value: 192.168.13.106
                - name: NFS_PATH
                  value: /data/nfs
          volumes:
            - name: nfs-client-root
              nfs:
                server: 192.168.13.106
                path: /data/nfs

    Step 2: the Deployment above runs under a ServiceAccount named nfs-client-provisioner, so we also need to create that ServiceAccount and bind the required permissions (nfs-client-sa.yml):

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
    
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["list", "watch", "create", "update", "patch"]
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
    
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        namespace: default
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io

      Here we create a ServiceAccount named nfs-client-provisioner and bind it to a ClusterRole named nfs-client-provisioner-runner. That ClusterRole grants, among other things, create/delete/get/list/watch permissions on persistentvolumes, which is what lets this ServiceAccount create PVs automatically.

    Step 3: with the nfs-client Deployment declared, we can create the StorageClass object (nfs-client-class.yml):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: course-nfs-storage
    provisioner: fuseim.pri/ifs # or choose another name, but it must match the Deployment's PROVISIONER_NAME env var

      This declares a StorageClass named course-nfs-storage. Note that the provisioner value must be identical to the PROVISIONER_NAME environment variable in the Deployment above.

    Next create the resources above with kubectl apply -f XXX.yml and inspect them:

    [root@k8smaster1 storageclass]# kubectl get pods,deployments -o wide | grep nfs 
    pod/nfs-client-provisioner-6888b56547-7ts79         1/1     Running   0          101m   10.244.2.118     k8snode2     <none>           <none>
    deployment.apps/nfs-client-provisioner         1/1     1            1           3h26m   nfs-client-provisioner      quay.io/external_storage/nfs-client-provisioner:latest   app=nfs-client-provisioner
    [root@k8smaster1 storageclass]# kubectl get storageclass -o wide
    NAME                 PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    course-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  44m

    You can also mark it as the default StorageClass at creation time:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: course-nfs-storage
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: fuseim.pri/ifs

    2. Create a PVC

    1. First create a PVC object, test-pvc.yml

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: test-pvc
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Mi

    Create it:

    [root@k8smaster1 storageclass]# kubectl apply -f test-pvc.yml 
    persistentvolumeclaim/test-pvc created
    [root@k8smaster1 storageclass]# kubectl get pvc -o wide
    NAME       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
    test-pvc   Pending                                                     2s    Filesystem

      This declares a PVC with ReadWriteMany access requesting 1Mi of space, but note that the manifest carries no StorageClass information. Created as-is, the PVC is not bound to a suitable PV automatically (it stays Pending, as shown above). There are two ways to have the StorageClass we created provision a matching PV for it.

    Method 1: set the course-nfs-storage StorageClass as the cluster's default storage backend

    kubectl patch storageclass course-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

    View the default StorageClass and unset it again:

    [root@k8smaster1 storageclass]# kubectl get storageclass
    NAME                           PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    course-nfs-storage (default)   fuseim.pri/ifs   Delete          Immediate           false                  122m
    [root@k8smaster1 storageclass]# kubectl patch storageclass course-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    storageclass.storage.k8s.io/course-nfs-storage patched
    [root@k8smaster1 storageclass]# kubectl get storageclass
    NAME                 PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    course-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  123m

    Method 2 (recommended): reference the StorageClass in the PVC itself, here via an annotations entry, as follows:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
      annotations:
        volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Mi

    Create it and inspect:

    [root@k8smaster1 storageclass]# kubectl apply -f test-pvc.yml 
    persistentvolumeclaim/test-pvc created
    [root@k8smaster1 storageclass]# kubectl get pvc
    NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
    test-pvc   Bound    pvc-97bce597-0788-49a1-be6d-5a938363797b   1Mi        RWX            course-nfs-storage   5s
    [root@k8smaster1 storageclass]# kubectl get pv
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS         REASON   AGE
    pvc-97bce597-0788-49a1-be6d-5a938363797b   1Mi        RWX            Delete           Bound    default/test-pvc   course-nfs-storage            53s
    [root@k8smaster1 storageclass]# kubectl describe pv pvc-97bce597-0788-49a1-be6d-5a938363797b
    Name:            pvc-97bce597-0788-49a1-be6d-5a938363797b
    Labels:          <none>
    Annotations:     pv.kubernetes.io/provisioned-by: fuseim.pri/ifs
    Finalizers:      [kubernetes.io/pv-protection]
    StorageClass:    course-nfs-storage
    Status:          Bound
    Claim:           default/test-pvc
    Reclaim Policy:  Delete
    Access Modes:    RWX
    VolumeMode:      Filesystem
    Capacity:        1Mi
    Node Affinity:   <none>
    Message:         
    Source:
        Type:      NFS (an NFS mount that lasts the lifetime of a pod)
        Server:    192.168.13.106
        Path:      /data/nfs/default-test-pvc-pvc-97bce597-0788-49a1-be6d-5a938363797b
        ReadOnly:  false
    Events:        <none>
    [root@k8smaster1 storageclass]# kubectl describe pvc test-pvc
    Name:          test-pvc
    Namespace:     default
    StorageClass:  course-nfs-storage
    Status:        Bound
    Volume:        pvc-97bce597-0788-49a1-be6d-5a938363797b
    Labels:        <none>
    Annotations:   pv.kubernetes.io/bind-completed: yes
                   pv.kubernetes.io/bound-by-controller: yes
                   volume.beta.kubernetes.io/storage-class: course-nfs-storage
                   volume.beta.kubernetes.io/storage-provisioner: fuseim.pri/ifs
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:      1Mi
    Access Modes:  RWX
    VolumeMode:    Filesystem
    Mounted By:    <none>
    Events:
      Type    Reason                 Age                    From                                                                                         Message
      ----    ------                 ----                   ----                                                                                         -------
      Normal  ExternalProvisioning   5m18s (x2 over 5m18s)  persistentvolume-controller                                                                  waiting for a volume to be created, either by external provisioner "fuseim.pri/ifs" or manually created by system administrator
      Normal  Provisioning           5m18s                  fuseim.pri/ifs_nfs-client-provisioner-6888b56547-7ts79_6aa1d177-8966-11ec-b368-9e5ccaa198de  External provisioner is provisioning volume for claim "default/test-pvc"
      Normal  ProvisioningSucceeded  5m18s                  fuseim.pri/ifs_nfs-client-provisioner-6888b56547-7ts79_6aa1d177-8966-11ec-b368-9e5ccaa198de  Successfully provisioned volume pvc-97bce597-0788-49a1-be6d-5a938363797b

      As shown, the PVC named test-pvc was created and is now Bound, with a corresponding VOLUME. Most importantly, the STORAGECLASS column now has a value: course-nfs-storage, the StorageClass we just created. A PV was also created automatically, with access mode RWX and reclaim policy Delete, produced by the StorageClass.

     3. Test

    1. Create test-pvc-pod.yml

    kind: Pod
    apiVersion: v1
    metadata:
      name: test-pod
    spec:
      containers:
      - name: test-pod
        image: busybox
        imagePullPolicy: IfNotPresent
        command:
        - "/bin/sh"
        args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
        volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
      restartPolicy: "Never"
      volumes:
      - name: nfs-pvc
        persistentVolumeClaim:
          claimName: test-pvc

      The Pod is simple: a busybox container (which bundles common Linux commands) creates a file named SUCCESS under /mnt, and /mnt is mounted from the test-pvc claim created above. Verification is equally simple: check whether SUCCESS shows up in the shared data directory on the NFS server.

    2. Create it and check

    [root@k8smaster1 storageclass]# kubectl apply -f test-pvc-pod.yml 
    pod/test-pod created
    [root@k8smaster1 storageclass]# kubectl get pods -o wide | grep test-
    test-pod                                        0/1     Completed   0          3m48s   10.244.2.119     k8snode2     <none>           <none>

    3. Check on the NFS server node

    [root@k8smaster2 default-test-pvc-pvc-97bce597-0788-49a1-be6d-5a938363797b]# pwd
    /data/nfs/default-test-pvc-pvc-97bce597-0788-49a1-be6d-5a938363797b
    [root@k8smaster2 default-test-pvc-pvc-97bce597-0788-49a1-be6d-5a938363797b]# ll
    total 0
    -rw-r--r-- 1 root root 0 Feb  9 02:36 SUCCESS

      The NFS export now contains a directory with a long name that follows the rule ${namespace}-${pvcName}-${pvName}.

    4. Common usage

    StorageClass is used most often with StatefulSet services, which can consume it directly through a volumeClaimTemplates field:

    1. test-statefulset-nfs.yml

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: nfs-web
    spec:
      serviceName: "nginx"
      replicas: 3
      selector:
        matchLabels:
          app: nfs-web
      template:
        metadata:
          labels:
            app: nfs-web
        spec:
          terminationGracePeriodSeconds: 10
          containers:
          - name: nginx
            image: nginx:1.7.9
            ports:
            - containerPort: 80
              name: web
            volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
      volumeClaimTemplates:
      - metadata:
          name: www
          annotations:
            volume.beta.kubernetes.io/storage-class: course-nfs-storage
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 1Gi

      A volumeClaimTemplates entry is just a PVC template, in the same way that template under a StatefulSet is a Pod template. Instead of creating each PVC object by hand, the template creates them dynamically, which is why this pattern is so common with StatefulSet services.
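    For reference, the PVCs a StatefulSet derives from a volumeClaimTemplates entry are named <template-name>-<statefulset-name>-<ordinal>, one per replica, which is where www-nfs-web-0 through www-nfs-web-2 below come from. A quick sketch (the helper name is ours):

```python
# PVC names a StatefulSet derives from a volumeClaimTemplate:
# <template-name>-<statefulset-name>-<ordinal>, one per replica.
def statefulset_pvc_names(template: str, sts: str, replicas: int) -> list[str]:
    return [f"{template}-{sts}-{i}" for i in range(replicas)]

print(statefulset_pvc_names("www", "nfs-web", 3))
# ['www-nfs-web-0', 'www-nfs-web-1', 'www-nfs-web-2']
```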

    2. Create and inspect

    [root@k8smaster1 storageclass]# kubectl apply -f test-statefulset-nfs.yml 
    statefulset.apps/nfs-web created
    [root@k8smaster1 storageclass]# kubectl get pods | grep nfs-web
    nfs-web-0                                       1/1     Running     0          2m42s
    nfs-web-1                                       1/1     Running     0          115s
    nfs-web-2                                       1/1     Running     0          109s
    [root@k8smaster1 storageclass]# kubectl get pvc
    NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
    test-pvc        Bound    pvc-97bce597-0788-49a1-be6d-5a938363797b   1Mi        RWX            course-nfs-storage   50m
    www-nfs-web-0   Bound    pvc-c234c21b-c3c4-4ffb-a14b-aa47cad7183e   1Gi        RWO            course-nfs-storage   2m57s
    www-nfs-web-1   Bound    pvc-7fdeb85f-481e-48c1-9734-284cce8014fb   1Gi        RWO            course-nfs-storage   2m10s
    www-nfs-web-2   Bound    pvc-7810f38b-2779-49e3-84f2-4b56e16df419   1Gi        RWO            course-nfs-storage   2m4s
    [root@k8smaster1 storageclass]# kubectl get pv
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS         REASON   AGE
    pvc-7810f38b-2779-49e3-84f2-4b56e16df419   1Gi        RWO            Delete           Bound    default/www-nfs-web-2   course-nfs-storage            2m16s
    pvc-7fdeb85f-481e-48c1-9734-284cce8014fb   1Gi        RWO            Delete           Bound    default/www-nfs-web-1   course-nfs-storage            2m22s
    pvc-97bce597-0788-49a1-be6d-5a938363797b   1Mi        RWX            Delete           Bound    default/test-pvc        course-nfs-storage            50m
    pvc-c234c21b-c3c4-4ffb-a14b-aa47cad7183e   1Gi        RWO            Delete           Bound    default/www-nfs-web-0   course-nfs-storage            3m9s

    3. The shared directory on the NFS server now looks like this

    [root@k8smaster2 nfs]# pwd
    /data/nfs
    [root@k8smaster2 nfs]# ll
    total 4
    drwxrwxrwx 2 root root 21 Feb  9 02:36 default-test-pvc-pvc-97bce597-0788-49a1-be6d-5a938363797b
    drwxrwxrwx 2 root root  6 Feb  9 02:49 default-www-nfs-web-0-pvc-c234c21b-c3c4-4ffb-a14b-aa47cad7183e
    drwxrwxrwx 2 root root  6 Feb  9 02:50 default-www-nfs-web-1-pvc-7fdeb85f-481e-48c1-9734-284cce8014fb
    drwxrwxrwx 2 root root  6 Feb  9 02:50 default-www-nfs-web-2-pvc-7810f38b-2779-49e3-84f2-4b56e16df419
    -rw-r--r-- 1 root root  4 Feb  8 21:22 test.txt
    [root@k8smaster2 nfs]# 

     Supplement: a StorageClass is effectively a template for creating PVs. A user requests storage via a PVC; the StorageClass creates a PV from the template automatically and binds it to the PVC.

      In a k8s cluster that already has a StorageClass environment, you can create a StorageClass plus a PVC (and its PV) as follows.

    1. The yml content is as follows

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: sc2
    provisioner: fuseim.pri/ifs
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc-sc
      annotations:
        volume.beta.kubernetes.io/storage-class: "sc2"
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Mi

    2. After creating them, the resources look like this

    [root@k8smaster1 storageclass]# kubectl get sc
    NAME                 PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    course-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  169m
    sc2                  fuseim.pri/ifs   Delete          Immediate           false                  9m28s
    [root@k8smaster1 storageclass]# kubectl get pv
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS         REASON   AGE
    pvc-1d0c10d4-c7f7-433f-8143-78b11fd8fe58   1Mi        RWX            Delete           Bound    default/test-pvc-sc     sc2                           9m40s
    pvc-7810f38b-2779-49e3-84f2-4b56e16df419   1Gi        RWO            Delete           Bound    default/www-nfs-web-2   course-nfs-storage            65m
    pvc-7fdeb85f-481e-48c1-9734-284cce8014fb   1Gi        RWO            Delete           Bound    default/www-nfs-web-1   course-nfs-storage            65m
    pvc-97bce597-0788-49a1-be6d-5a938363797b   1Mi        RWX            Delete           Bound    default/test-pvc        course-nfs-storage            113m
    pvc-c234c21b-c3c4-4ffb-a14b-aa47cad7183e   1Gi        RWO            Delete           Bound    default/www-nfs-web-0   course-nfs-storage            66m
    [root@k8smaster1 storageclass]# kubectl get pvc
    NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
    test-pvc        Bound    pvc-97bce597-0788-49a1-be6d-5a938363797b   1Mi        RWX            course-nfs-storage   113m
    test-pvc-sc     Bound    pvc-1d0c10d4-c7f7-433f-8143-78b11fd8fe58   1Mi        RWX            sc2                  9m48s
    www-nfs-web-0   Bound    pvc-c234c21b-c3c4-4ffb-a14b-aa47cad7183e   1Gi        RWO            course-nfs-storage   66m
    www-nfs-web-1   Bound    pvc-7fdeb85f-481e-48c1-9734-284cce8014fb   1Gi        RWO            course-nfs-storage   65m
    www-nfs-web-2   Bound    pvc-7810f38b-2779-49e3-84f2-4b56e16df419   1Gi        RWO            course-nfs-storage   65m

    Supplement: if NFS stops working after a reboot, the fix is:

    On the NFS server, start the nfs service and enable it at boot; on the NFS clients, start and enable the client-side NFS services (from nfs-utils) at boot as well.
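    A minimal sketch of that fix (the unit names are assumptions and vary by distro; on this CentOS-era setup the server unit is nfs-server, with nfs as an alias):

```shell
# On the NFS server: start the service now and enable it at boot
systemctl enable --now nfs-server rpcbind

# On each k8s node (NFS client): make sure rpcbind is up at boot too
systemctl enable --now rpcbind
```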

    Supplement: once a default StorageClass is set, PVCs that do not specify one use the default

    1. The yml:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: test-claim
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Mi

    2. After creation: it used the default StorageClass

    [root@k8smaster01 storageclass]# kubectl apply -f test-default-sc.yml 
    persistentvolumeclaim/test-claim created
    [root@k8smaster01 storageclass]# kubectl get pvc
    NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
    test-claim   Bound    pvc-9f784512-e537-468c-9e3c-2b084776a368   1Mi        RWX            course-nfs-storage   4s

    Supplement: core concept definitions

    PV, short for PersistentVolume, is an abstraction over underlying shared storage. PVs are created and configured by administrators and are tied to the concrete shared-storage implementation, such as Ceph, GlusterFS, or NFS, each integrated through a plugin mechanism.

    PVC, short for PersistentVolumeClaim, is a user's declaration of storage needs. A PVC is analogous to a Pod: Pods consume node resources while PVCs consume PV resources; Pods request CPU and memory, while PVCs request a specific storage size and access modes.

    With StorageClass, an administrator can define storage resources as typed tiers, such as fast or slow storage. From the StorageClass description, users can see the concrete characteristics of each storage tier at a glance and request storage that fits their application.

  • Original source: https://www.cnblogs.com/qlqwjy/p/15817294.html