Resource Creation Explained
I: Pod and Common Parameters
1. Overview
2. Template
3. Deleting a Pod
The deletion flow works as follows:
- The user sends a command to delete the Pod; the default grace period is 30 seconds.
- Once the Pod exceeds the grace period, the API server updates the Pod's status to "dead".
- The Pod shows as "Terminating" in the client CLI.
- In parallel with step 3, when the kubelet sees the Pod marked "Terminating", it begins shutting the Pod down:
- If a preStop hook is defined for the Pod, it is invoked before the Pod is stopped. If the preStop hook is still running when the grace period expires, step 2 is extended by a small additional grace period of 2 seconds.
- The TERM signal is sent to the processes in the Pod.
- In parallel with step 3, the Pod is removed from the endpoint lists of its Services and is no longer considered part of the replication controller. Pods that shut down slowly continue to serve traffic forwarded by the load balancer.
- When the grace period expires, any processes still running in the Pod are killed with SIGKILL.
- The kubelet finishes deleting the Pod on the API server by setting the grace period to 0 (immediate deletion). The Pod disappears from the API and is no longer visible to the client.
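The grace period and preStop hook above are both set in the Pod spec. A minimal sketch (the Pod name, image, and sleep command are illustrative assumptions, not from the original):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-demo                   # hypothetical name
spec:
  terminationGracePeriodSeconds: 30     # how long to wait before SIGKILL (the default)
  containers:
  - name: app
    image: nginx:1.7.9
    lifecycle:
      preStop:                          # runs before TERM is sent to the container
        exec:
          command: ["/bin/sh", "-c", "sleep 5"]   # e.g. give time to drain connections
```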
3.1. Default Deletion
A default deletion follows the flow above and waits out the 30 s grace period:
kubectl delete pod <pod-name> --namespace=xxx
3.2. Forced Deletion
Setting the grace period to 0 deletes the Pod immediately, with no waiting:
kubectl delete pod <pod-name> --namespace=xxx --force --grace-period=0
4. Setting the Pod Hostname
The template.spec.hostname field sets the hostname inside the Pod.
5. Image Pull Policy (imagePullPolicy)
imagePullPolicy:
- Always: always pull the image, whether or not it already exists locally
- Never: never pull the image, whether or not it exists locally
- IfNotPresent: pull the image only if it is not already present
Notes:
- The default is IfNotPresent, but images tagged :latest default to Always.
- When pulling, Docker verifies the image digest; if the digest has not changed, the image data is not downloaded again.
- Avoid the :latest tag in production; in development it can be convenient for automatically pulling the newest image.
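A container spec fragment that makes the policy explicit (the container name is an illustrative assumption):

```yaml
containers:
- name: web
  image: nginx:1.7.9
  imagePullPolicy: IfNotPresent   # pull only when the image is missing locally
```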
II: RC (ReplicationController)
1. Overview
2. Template
apiVersion: v1
kind: ReplicationController   # resource type
metadata:
  name: zabbix-db
  namespace: zabbix
spec:
  replicas: 1
  selector:
    app: zabbix-db
  template:
    metadata:
      name: zabbix-db
      labels:
        app: zabbix-db
    spec:
      terminationGracePeriodSeconds: 30   # grace period for container shutdown; defaults to 30s
      hostname: zabbix-db                 # hostname inside the Pod
      containers:
      - name: zabbix-db
        image: mysql:5.7.22
        env:
        - name: MYSQL_DATABASE
          value: zabbix
        - name: MYSQL_USER
          value: zabbix
        - name: MYSQL_PASSWORD
          value: zabbix
        - name: MYSQL_ROOT_PASSWORD
          value: Abc123@
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: /var/lib/mysql
          readOnly: false
          name: zabbix-database
      volumes:
      - name: zabbix-database
        nfs:
          server: 172.30.80.222
          path: "/data/zabbix/zabbix_db/mysql"
III: Deployment
1. Overview
Deployment provides declarative updates for Pods and ReplicaSets (the next-generation ReplicationController). Compared with an RC, a Deployment can be upgraded directly with kubectl edit deployment/deploymentName or the kubectl set commands; the mechanism is that any change to the Pod template (for example updating a label or the image version) triggers a rolling update of the Deployment.
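For example, bumping the image version with kubectl set image triggers a rolling update (the deployment and container names match the template in this section; adjust to your own, and the target version here is illustrative):

```shell
kubectl set image deployment/nginx nginx=nginx:1.9.1
# or edit the Pod template interactively:
kubectl edit deployment/nginx
```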
2. Template
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  minReadySeconds: 30          # during a rolling update, a new Pod must stay ready for at least 30s to count as available
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:             # only applies when type is RollingUpdate
      maxSurge: 25%            # up to 25% extra Pods may be created above the desired count during the update
      maxUnavailable: 25%      # how many Pods may be unavailable at a time, as a percentage or an absolute number (e.g. 5); default 25%
    type: RollingUpdate        # update strategy: Recreate kills all Pods at once before upgrading; RollingUpdate replaces them incrementally
  template:                    # Pod template
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        imagePullPolicy: IfNotPresent
        resources:             # resource requests and limits
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        env:                   # environment variables
        - name: DB_SERVER_HOST
          value: zabbix-db-server
        - name: MYSQL_DATABASE
          value: zabbix
        - name: MYSQL_USER
          value: zabbix
        - name: MYSQL_PASSWORD
          value: zabbix
        - name: MYSQL_ROOT_PASSWORD
          value: Abc123@
        - name: ZBX_HISTORYSTORAGEURL
          value: http://192.168.2.171:9200
        - name: ZBX_HISTORYSTORAGETYPES
          value: uint,dbl,str,log,text
        ports:
        - containerPort: 80
        livenessProbe:         # livenessProbe tells K8S whether the Pod is alive; a failing Pod is killed and replaced to keep the ReplicaSet at its desired count
          httpGet:
            path: /
            port: 80
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:        # readinessProbe tells K8S whether the Pod has started successfully; pick a check that fits the application, either an exec command or an httpGet
          httpGet:
            path: /
            port: 80
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
          successThreshold: 1
      restartPolicy: Always    # restart containers when they fail
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  sessionAffinity: ClientIP
  selector:
    app: nginx
  ports:
  # expose container port 80 on port 8888 of every node
  - port: 80         # Pod port
    nodePort: 8888   # node port (note: the default NodePort range is 30000-32767, so 8888 requires a custom --service-node-port-range on the apiserver)
IV: HPA
1. Overview
The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, or replica set based on observed CPU utilization (or, with custom metrics support, based on other application-provided metrics). Note that Horizontal Pod Autoscaling does not apply to objects that cannot be scaled, such as DaemonSets.
2. Template
Create a Deployment; resource requests and limits must be set for the HPA to work:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  minReadySeconds: 30
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        imagePullPolicy: IfNotPresent
        resources:
          requests:            # resource requests and limits are required for HPA autoscaling
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  sessionAffinity: ClientIP
  selector:
    app: nginx
  ports:
  # expose container port 80 on port 8888 of every node
  - port: 80
    nodePort: 8888
Create the HPA:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler   # resource type
metadata:
  name: nginx-hpa               # name
  labels:                       # labels
    app: hpa
    version: v0.0.1
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1   # must match the apiVersion of the target Deployment
    kind: Deployment
    name: nginx                 # the name used when the Deployment was created
  minReplicas: 1                # minimum number of Pods
  maxReplicas: 10               # maximum number of Pods
  targetCPUUtilizationPercentage: 70   # target average CPU utilization; above 70% the HPA scales out
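The same HPA can also be created imperatively; this one-liner is equivalent to the manifest above:

```shell
kubectl autoscale deployment nginx --min=1 --max=10 --cpu-percent=70
```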
Check the HPA:
# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx-hpa Deployment/nginx 0% / 70% 1 10 1 4d
Show details:
# kubectl describe hpa/nginx-hpa
Name: nginx-hpa
Namespace: default
Labels: app=hpa
version=v0.0.1
Annotations: <none>
CreationTimestamp: Thu, 02 Aug 2018 16:39:55 +0800
Reference: Deployment/nginx
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 0% (0) / 70%
Min replicas: 1
Max replicas: 10
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale
ScalingActive True ValidMetricFound the HPA was able to succesfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited True TooFewReplicas the desired replica count was less than the minimum replica count
Events: <none>
V: StatefulSet
1. Overview
2. Template
2.1. Environment
StatefulSet Name | Service Name
---|---
zoo01 | zoo01
apiVersion: v1
kind: Service
metadata:
  name: zoo01
  labels:
    app: zoo01
spec:
  ports:
  - port: 2888
    name: leader-listen
  - port: 3888
    name: leader-vote
  clusterIP: None              # headless Service, required by the StatefulSet
  selector:
    app: zoo01
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zoo01
  labels:
    app: zoo01
spec:
  serviceName: "zoo01"         # must match the name of the headless Service above
  replicas: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: zoo01
    spec:
      terminationGracePeriodSeconds: 30
      hostname: zoo01
      containers:
      - name: zoo01
        image: zookeeper:3.5
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        env:
        - name: ZOO_MY_ID
          value: "1"
        - name: ZOO_SERVERS
          value: "server.1=zoo01:2888:3888 server.2=zoo02:2888:3888 server.3=zoo03:2888:3888"
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
      restartPolicy: Always
VI: PV and PVC
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitlab-pv              # PVs are cluster-scoped, so no namespace is set
  labels:
    name: gitlab-pv            # matched by the PVC's selector
spec:
  capacity:
    storage: 10Gi              # size
  accessModes:
  - ReadWriteOnce              # access mode
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:                # mount options
  - hard
  - nfsvers=4.1
  nfs:                         # NFS server
    path: /data/gitlab
    server: 172.30.80.222
PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gitlab-pvc
  namespace: dev
  labels:
    type: nfs
spec:
  accessModes:
  - ReadWriteOnce              # a PVC binds to a PV chiefly by access mode and requested size
  storageClassName: slow
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      name: gitlab-pv
Pod
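The PVC above can then be consumed from a Pod. A minimal sketch (the Pod name, image, and mount path are illustrative assumptions; only claimName must match the PVC defined above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gitlab                           # hypothetical name
  namespace: dev
spec:
  containers:
  - name: gitlab
    image: gitlab/gitlab-ce:latest       # illustrative image
    volumeMounts:
    - mountPath: /var/opt/gitlab         # illustrative mount path
      name: gitlab-storage
  volumes:
  - name: gitlab-storage
    persistentVolumeClaim:
      claimName: gitlab-pvc              # the PVC defined above
```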
VIII: Extensions
8.1. Scheduling a Pod onto a Specific Node
Overview:
Pod.spec.nodeSelector selects nodes through the Kubernetes label-selector mechanism: the scheduler matches node labels and then places the Pod on a matching node. This rule is a hard constraint.
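Besides built-in labels such as kubernetes.io/hostname, you can attach custom labels to a node for scheduling (the label key/value here are illustrative assumptions):

```shell
kubectl label nodes 172.30.80.220 env=prod   # add a custom label to the node
kubectl get nodes --show-labels              # verify the node's labels
```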
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zabbix-server
  namespace: zabbix
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zabbix-server
  template:
    metadata:
      name: zabbix-server
      labels:
        app: zabbix-server
    spec:
      nodeSelector:            # schedule based on node labels
        kubernetes.io/hostname: 172.30.80.220   # the target node's label value
      containers:
      - name: zabbix-server
        image: zabbix/zabbix-server-mysql:latest
        env:
        - name: DB_SERVER_HOST
          value: zabbix-db-server
        - name: MYSQL_DATABASE
          value: zabbix
        - name: MYSQL_USER
          value: zabbix
        - name: MYSQL_PASSWORD
          value: zabbix
        - name: MYSQL_ROOT_PASSWORD
          value: Abc123@
        - name: ZBX_HISTORYSTORAGEURL
          value: http://192.168.2.171:9200
        - name: ZBX_HISTORYSTORAGETYPES
          value: uint,dbl,str,log,text
        ports:
        - containerPort: 10051