Overview:
Kubernetes - Storage
Creating a ConfigMap
Using a ConfigMap in a Pod
Using a ConfigMap through the volume plugin
Secret
Kubernetes-volume
hostPath: mounts a file or directory from the host node's filesystem into a Pod in the cluster
PersistentVolume (PV)
PersistentVolumeClaim (PVC)
Kubernetes - Storage
$ ls docs/user-guide/configmap/kubectl/
game.properties
ui.properties
$ cat docs/user-guide/configmap/kubectl/game.properties
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
$ cat docs/user-guide/configmap/kubectl/ui.properties
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice
$ kubectl create configmap game-config --from-file=docs/user-guide/configmap/kubectl
--from-file: every file in the given directory becomes one key-value pair in the ConfigMap; the key is the file name and the value is the file's content.
Inspect what was created:
kubectl get cm
kubectl get cm game-config -o yaml
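Both files become data entries, so game-config should show two keys. A sketch of the expected output (AGE illustrative):

$ kubectl get cm
NAME          DATA   AGE
game-config   2      1m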
$ kubectl create configmap game-config-2 --from-file=docs/user-guide/configmap/kubectl/game.properties
$ kubectl get configmaps game-config-2 -o yaml
$ kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm
$ kubectl get configmaps special-config -o yaml
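The two literals become data entries; a trimmed sketch of the output (metadata such as creationTimestamp omitted):

apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm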
$ mkdir env
$ cd env
$ vim env.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config   # ConfigMap name
  namespace: default
data:                    # key-value data
  special.how: very      # key special.how, value very
  special.type: charm    # key special.type, value charm
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
  namespace: default
data:
  log_level: INFO
kubectl apply -f env.yaml
Inject the environment variables above into a Pod:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: hub.atguigu.com/library/myapp:v1
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config   # ConfigMap name
              key: special.how       # the value of this key is assigned to SPECIAL_LEVEL_KEY
        - name: SPECIAL_TYPE_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: special.type      # the value of this key is assigned to SPECIAL_TYPE_KEY
      envFrom:                       # import all entries of env-config as environment variables
        - configMapRef:
            name: env-config
  restartPolicy: Never
Create the Pod.
Check the result in the logs:
kubectl logs dapi-test-pod
You can see that the three environment variables have been injected.
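The relevant lines of the env output should look like this (the container's other default variables are omitted):

SPECIAL_LEVEL_KEY=very
SPECIAL_TYPE_KEY=charm
log_level=INFO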
Setting command-line arguments from a ConfigMap (the same special-config as above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm
$ vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod66
spec:
  containers:
    - name: test-container
      image: hub.atguigu.com/library/myapp:v1
      command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
      env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: special.how
        - name: SPECIAL_TYPE_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: special.type
  restartPolicy: Never
$ kubectl create -f pod1.yaml
Check the logs:
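The echoed values come straight from the ConfigMap keys:

$ kubectl logs dapi-test-pod66
very charm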
Using the ConfigMap through the volume plugin (again the same special-config):

apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: hub.atguigu.com/library/myapp:v1
      command: [ "/bin/sh", "-c", "cat /etc/config/special.how" ]   # the key is the file name, the key's value the file content
      volumeMounts:
        - name: config-volume       # mount the volume declared below
          mountPath: /etc/config    # mount path; each ConfigMap key appears here as a file holding the key's value
  volumes:
    - name: config-volume           # volume name
      configMap:
        name: special-config        # ConfigMap name
  restartPolicy: Never
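Create the Pod and read its log; the file content is the key's value (the manifest file name here is an assumption):

$ kubectl create -f pod.yaml
$ kubectl logs dapi-test-pod
very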
$ vim 111.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: log-config
  namespace: default
data:
  log_level: INFO
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
        - name: my-nginx
          image: hub.atguigu.com/library/myapp:v1
          ports:
            - containerPort: 80
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config   # log-config is mounted into this directory
      volumes:
        - name: config-volume
          configMap:
            name: log-config
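Apply the manifest (file name from the vim command above):

$ kubectl apply -f 111.yaml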
$ kubectl exec -it <pod-name> -- cat /etc/config/log_level
INFO
$ kubectl edit configmap log-config   # change log_level from INFO to DEBUG
$ kubectl exec -it <pod-name> -- cat /etc/config/log_level
DEBUG
$ kubectl exec `kubectl get pods -l run=my-nginx -o=name|cut -d "/" -f2` cat /etc/config/log_level
DEBUG
As you can see, once the ConfigMap is modified, the file content inside the Pod is updated automatically. Note that values consumed as environment variables are not updated this way, and a ConfigMap change does not by itself trigger a rolling update of the Pods; you can force one by patching a template annotation:
$ kubectl patch deployment my-nginx --patch '{"spec": {"template": {"metadata": {"annotations": {"version/config": "20190411" }}}}}'
Secret
Service Account: Kubernetes automatically mounts the service-account credentials into every Pod at /run/secrets/kubernetes.io/serviceaccount:
$ kubectl run nginx --image nginx
deployment "nginx" created $ kubectl get pods NAME READY STATUS RESTARTS AGE nginx-3137573019-md1u2 1/1 Running 0 13s $ kubectl exec nginx-3137573019-md1u2 ls /run/secrets/kubernetes.io/serviceaccount ca.crt namespace token
$ echo -n "admin" | base64 YWRtaW4= $ echo -n "1f2d1e2e67df" | base64 MWYyZDFlMmU2N2Rm
$ vim secrets.yaml

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: MWYyZDFlMmU2N2Rm
  username: YWRtaW4=
$ kubectl apply -f secrets.yaml
Decoding is just as simple:
echo -n "YWRtaW4=" | base64 -d
admin
Mounting the Secret into a Pod as a volume:

apiVersion: v1
kind: Pod
metadata:
  labels:
    name: secret-test
  name: secret-test
spec:
  volumes:                      # declare the volume
    - name: secrets             # volume name
      secret:
        secretName: mysecret    # the Secret created above
  containers:
    - image: hub.atguigu.com/library/myapp:v1
      name: db
      volumeMounts:
        - name: secrets             # mount the volume declared above
          mountPath: "/etc/secrets" # mount path
          readOnly: true
Note: although the values are stored base64-encoded, they are decoded automatically when used; the files under /etc/secrets above contain the decoded content.
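A quick check (pod name and paths from the manifest above):

$ kubectl exec secret-test -- cat /etc/secrets/username
admin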
Exporting Secret values as container environment variables:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: pod-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: pod-deployment
    spec:
      containers:
        - name: pod-1
          image: hub.atguigu.com/library/myapp:v1
          ports:
            - containerPort: 80
          env:
            - name: TEST_USER
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: username    # the value of username is assigned to TEST_USER
            - name: TEST_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: password    # the value of password is assigned to TEST_PASSWORD
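A sketch of verifying the injected variables (selecting a pod by the app label is an assumption):

$ kubectl exec `kubectl get pods -l app=pod-deployment -o name | head -1 | cut -d "/" -f2` -- env | grep TEST
TEST_USER=admin
TEST_PASSWORD=1f2d1e2e67df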
kubernetes.io/dockerconfigjson (example: a private Harbor registry)
Log in to Harbor and push the image (succeeds).
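A sketch of those steps, assuming the Harbor address and the image tag used in the Pod manifest below:

$ docker login hub.atguigu.com   # authenticate with the Harbor user's credentials
$ docker push hub.atguigu.com/test/myapp:v2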
The image is now ready.
Start the experiment:
Log out of Harbor.
Before authenticating, test pulling the private-registry image (the pull fails).
Use kubectl to create a docker-registry authentication Secret:
## type: docker-registry, name: myregistrykey
$ kubectl create secret docker-registry myregistrykey \
  --docker-server=DOCKER_REGISTRY_SERVER \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL
# DOCKER_REGISTRY_SERVER: the private registry (Harbor) address
# DOCKER_USER: user name
# DOCKER_PASSWORD: password
# DOCKER_EMAIL: email address
secret "myregistrykey" created.
$ vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
    - name: foo
      image: hub.atguigu.com/test/myapp:v2   # image in the private registry
  imagePullSecrets:                          # registry authentication
    - name: myregistrykey
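Create the Pod and check its status (the manifest file name and AGE are illustrative):

$ kubectl create -f pod.yaml
$ kubectl get pod foo
NAME   READY   STATUS    RESTARTS   AGE
foo    1/1     Running   0          10s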
If the Pod's status is Running, the private image was pulled successfully!
Kubernetes-volume
$ vim em.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd1
spec:
  containers:
    - image: wangyanglinux/myapp:v2
      name: test-container
      volumeMounts:
        - mountPath: /cache           # mount the volume below at this path in this container
          name: cache-volume          # volume name
    - name: liveness-exec-container   # second container
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["/bin/sh", "-c", "sleep 6000s"]
      volumeMounts:
        - mountPath: /test            # the same volume, mounted at a different path
          name: cache-volume
  volumes:
    - name: cache-volume
      emptyDir: {}                    # empty volume
$ kubectl create -f em.yaml
Exec into the first container and create a file under its mount directory (/cache).
Exec into the second container, check its mount directory (/test), and edit the file.
Go back to the first container and look again.
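A minimal sketch of this round trip (the file name and contents are made up for illustration):

$ kubectl exec -it test-pd1 -c test-container -- /bin/sh
/ # echo hello > /cache/1.txt
/ # exit
$ kubectl exec -it test-pd1 -c liveness-exec-container -- /bin/sh
/ # cat /test/1.txt
hello
/ # echo world >> /test/1.txt
/ # exit
$ kubectl exec -it test-pd1 -c test-container -- cat /cache/1.txt
hello
world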
Conclusion: two containers can share the same volume even when they mount it at different paths.
hostPath
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /test-pd   # mount path in the container
          name: test-volume
  volumes:
    - name: test-volume
      hostPath:
        # directory on the host
        path: /data
        # this field is optional; Directory requires the directory to already exist at the given path, otherwise the Pod errors out
        type: Directory
Create the /data directory on the host node.
Exec into the container and edit a file under the mount directory.
Check and edit the file under the local /data directory on the node.
Go back into the container and look again.
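A minimal sketch of this round trip (file name and contents are made up; run the host-side commands on the node where the Pod was scheduled):

# on the node (must exist before the Pod starts, since type is Directory)
$ mkdir /data
# in the container
$ kubectl exec -it test-pd -- /bin/sh
/ # echo hello > /test-pd/1.txt
/ # exit
# on the node
$ cat /data/1.txt
hello
$ echo world >> /data/1.txt
# in the container again
$ kubectl exec -it test-pd -- cat /test-pd/1.txt
hello
world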
As you can see, the host directory and the volume mounted in the container share the same data.
Kubernetes-PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi             # storage size
  volumeMode: Filesystem     # volume mode
  accessModes:               # access modes
    - ReadWriteOnce          # read-write by a single node at a time
  persistentVolumeReclaimPolicy: Recycle   # reclaim policy
  storageClassName: slow     # storage class name, an important attribute for grouping PVs
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp               # exported directory
    server: 172.17.0.2       # NFS server address
Reclaim policies: Retain (keep the volume for manual reclamation), Recycle (deprecated; a basic scrub of the volume's contents), Delete (delete the backing storage asset along with the PV).
Persistence demo - NFS
On the NFS server (192.168.66.100):
yum install -y nfs-common nfs-utils rpcbind
mkdir /nfs
mkdir /nfs{1..3}
chmod 666 /nfs*
chown nfsnobody /nfs*
cat /etc/exports
/nfs  *(rw,no_root_squash,no_all_squash,sync)
/nfs1 *(rw,no_root_squash,no_all_squash,sync)
/nfs2 *(rw,no_root_squash,no_all_squash,sync)
/nfs3 *(rw,no_root_squash,no_all_squash,sync)
systemctl start rpcbind
systemctl start nfs
Install the NFS client on the other nodes:
yum -y install nfs-utils rpcbind
mkdir /test
# test-mount the NFS server's shared directory
mount -t nfs 192.168.66.100:/nfs /test
cd /test
vim 111.txt
# unmount once the test is done
umount /test
rm -rf /test
# the steps above only verify that NFS works
$ vim pv.yaml   # deploy several PVs to see how PVs and PVCs match up
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # reclaim policy
  storageClassName: nfs
  nfs:
    path: /nfs
    server: 192.168.66.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs1
    server: 192.168.66.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv3
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  nfs:
    path: /nfs2
    server: 192.168.66.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv4
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs3
    server: 192.168.66.100

$ kubectl create -f pv.yaml
$ kubectl get pv
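The four PVs should all show up as Available (AGE illustrative):

NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfspv1   10Gi       RWO            Retain           Available           nfs                     5s
nfspv2   5Gi        ROX            Retain           Available           nfs                     5s
nfspv3   5Gi        RWX            Retain           Available           slow                    5s
nfspv4   1Gi        ROX            Retain           Available           nfs                     5s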
$ vim pod.yaml
apiVersion: v1
kind: Service   # svc
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None   # headless service: no cluster IP/port to connect to
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet   # note: a StatefulSet requires a headless Service, which must be created first
metadata:
  name: web
spec:
  selector:
    matchLabels:   # match Pods labeled app=nginx
      app: nginx
  serviceName: "nginx"   # the headless Service above
  replicas: 3
  template:
    metadata:
      labels:        # label information
        app: nginx   # app=nginx (key=value)
    spec:
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www   # mount the claim declared below
              mountPath: /usr/share/nginx/html   # mounted (shared) directory in the container
  volumeClaimTemplates:   # claim template: one PVC per Pod
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "nfs"
        resources:
          requests:
            storage: 1Gi   # requested capacity
$ kubectl create -f pod.yaml
PVC matching conditions: 1. access mode ReadWriteOnce; 2. storageClassName "nfs"; 3. capacity >= 1Gi. Among the PVs that match the first two conditions, the smallest one that still satisfies the capacity request is bound first.
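At this point only nfspv1 satisfies all three conditions, so only the first replica's claim can bind; the rest stay Pending until matching PVs exist (output illustrative):

$ kubectl get pvc
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-web-0   Bound     nfspv1   10Gi       RWO            nfs            1m
www-web-1   Pending                                      nfs            50s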
Recreate the PVs so the remaining claims can bind.
Delete the old PVs:
kubectl delete pv nfspv3
kubectl delete pv nfspv4
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv3
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # reclaim policy
  storageClassName: nfs
  nfs:
    path: /nfs2
    server: 192.168.66.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv4
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs3
    server: 192.168.66.100
Now all of them are bound. Each Pod gets its own PVC from the claim template; the PVC is matched against the available PVs and bound once a match is found.
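A sketch of the resulting state (which PV ends up serving which claim may differ):

$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   AGE
nfspv1   10Gi       RWO            Retain           Bound       default/www-web-0   nfs            10m
nfspv2   5Gi        ROX            Retain           Available                       nfs            10m
nfspv3   5Gi        RWO            Retain           Bound       default/www-web-1   nfs            1m
nfspv4   50Gi       RWO            Retain           Bound       default/www-web-2   nfs            1m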
Test 1:
View the details of the PV bound to the first PVC:
kubectl describe pv nfspv1
Go to the corresponding NFS server (192.168.66.100):
cd /nfs
vim index.html
aaaaaaaa
chmod 777 index.html
From another node in the cluster, access the Pod:
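A sketch of the check (find the Pod IP first; it will differ in your cluster):

$ kubectl get pod web-0 -o wide   # note the Pod IP
$ curl <pod-ip>
aaaaaaaa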
Test 2:
View the details of the PV bound to the second PVC:
kubectl describe pv nfspv3
Go to the corresponding NFS server (192.168.66.100):
cd /nfs2
vim index.html
bbbbbbbbbbbbbbbbb
chmod 777 index.html
Access it from another node:
The third one works the same way (omitted). All three have completed their bindings, and the data is consistent.
Test data persistence after a Pod is deleted.
Delete the Pod: a new Pod with the same name is created and its IP changes, but the name used to access it stays the same.
Access it again and check the data: it is unchanged, nothing is lost. These are some of the guarantees a StatefulSet provides.
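A sketch of that check (IPs illustrative):

$ kubectl delete pod web-0
$ kubectl get pod web-0 -o wide   # recreated with the same name but a new IP
$ curl <new-pod-ip>
aaaaaaaa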