1. Core Concepts
2. Multi-container Pods
3. Pod design
4. Configuration
5. Observability
6. Services and Networking
7. State Persistence
===================================================
1. Core Concepts
Create a namespace called 'mynamespace' and a pod with image nginx called nginx on this namespace
$ kubectl create namespace mynamespace
$ kubectl get ns
NAME          STATUS   AGE
mynamespace   Active   6m13s
$ kubectl run nginx --image=nginx --restart=Never -n mynamespace
pod/nginx created
$ kubectl get po -n mynamespace
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          5m2s
Create the pod that was just described using YAML
kubectl run nginx --image=nginx --restart=Never --dry-run=client -n mynamespace -o yaml > pod.yaml
cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
Delete the old nginx pod and recreate it from the file:
kubectl delete po nginx -n mynamespace
kubectl create -f pod.yaml -n mynamespace
Alternatively, you can do it in one line:
kubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml | kubectl create -n mynamespace -f -
Create a busybox pod (using kubectl command) that runs the command "env". Run it and see the output
$ kubectl run busybox --image=busybox --command --restart=Never -it -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=busybox
TERM=xterm
KUBERNETES_PORT=tcp://10.244.64.1:443
KUBERNETES_PORT_443_TCP=tcp://10.244.64.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.244.64.1
KUBERNETES_SERVICE_HOST=10.244.64.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
HOME=/root
$ kubectl run busybox --image=busybox --command --restart=Never -- env
pod/busybox created
$ kubectl logs busybox
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=busybox
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.244.64.1
KUBERNETES_SERVICE_HOST=10.244.64.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.244.64.1:443
KUBERNETES_PORT_443_TCP=tcp://10.244.64.1:443
HOME=/root
Create a busybox pod (using YAML) that runs the command "env". Run it and see the output
$ kubectl run busybox --image=busybox --restart=Never --dry-run=client -o yaml --command -- env > envpod.yaml
$ cat envpod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - command:
    - env
    image: busybox
    name: busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
$ kubectl apply -f envpod.yaml
pod/busybox created
$ kubectl logs busybox
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=busybox
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.244.64.1:443
KUBERNETES_PORT_443_TCP=tcp://10.244.64.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.244.64.1
KUBERNETES_SERVICE_HOST=10.244.64.1
KUBERNETES_SERVICE_PORT=443
HOME=/root
Get the YAML for a new namespace called 'myns' without creating it
$ kubectl create namespace myns -o yaml --dry-run=client
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: myns
spec: {}
status: {}
Get the YAML for a new ResourceQuota called 'myrq' with hard limits of 1 CPU, 1G memory and 2 pods without creating it
$ kubectl create quota myrq --hard=cpu=1,memory=1G,pods=2 --dry-run=client -o yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  creationTimestamp: null
  name: myrq
spec:
  hard:
    cpu: "1"
    memory: 1G
    pods: "2"
status: {}
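If you do create the quota (say, in the mynamespace namespace from the first exercise — an assumption, the exercise doesn't require it), you can check consumption against the hard limits with kubectl describe:
kubectl create quota myrq --hard=cpu=1,memory=1G,pods=2 -n mynamespace
kubectl describe quota myrq -n mynamespace # prints a Used vs Hard table for cpu, memory and pods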
Get pods on all namespaces
$ kubectl get po --all-namespaces
# or
$ kubectl get po -A
Create a pod with image nginx called nginx and expose traffic on port 80
$ kubectl run nginx --image=nginx --restart=Never --port=80
pod/nginx created
Change pod's image to nginx:1.7.1. Observe that the container will be restarted as soon as the image gets pulled
$ kubectl set image pod/nginx nginx=nginx:1.7.1
pod/nginx image updated
$ kubectl describe po nginx
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  4m53s  default-scheduler  Successfully assigned default/nginx to NodeIP
  Normal  Pulling    4m52s  kubelet            Pulling image "nginx"
  Normal  Pulled     4m38s  kubelet            Successfully pulled image "nginx"
  Normal  Created    4m38s  kubelet            Created container nginx
  Normal  Started    4m38s  kubelet            Started container nginx
  Normal  Killing    110s   kubelet            Container nginx definition changed, will be restarted
  Normal  Pulling    110s   kubelet            Pulling image "nginx:1.7.1"
$ kubectl get po nginx -w
Note: you can check pod's image by running
$ kubectl get po nginx -o jsonpath='{.spec.containers[].image}{"\n"}'
nginx:1.7.1
Get nginx pod's ip created in previous step, use a temp busybox image to wget its '/'
$ kubectl get po -o wide
NAME      READY   STATUS      RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
busybox   0/1     Completed   0          20m   10.244.26.205   NodeIP   <none>           <none>
nginx     1/1     Running     0          11m   10.244.26.206   NodeIP   <none>           <none>
$ kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- 10.244.26.206:80
Alternatively you can also try a more advanced option:
# Get IP of the nginx pod
NGINX_IP=$(kubectl get pod nginx -o jsonpath='{.status.podIP}')
# create a temp busybox pod
kubectl run busybox --image=busybox --env="NGINX_IP=$NGINX_IP" --rm -it --restart=Never -- sh -c 'wget -O- $NGINX_IP:80'
Or just in one line:
$ kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- $(kubectl get pod nginx -o jsonpath='{.status.podIP}:{.spec.containers[0].ports[0].containerPort}')
Connecting to 10.244.26.206:80 (10.244.26.206:80)
writing to stdout
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
- 100% |********************************| 612 0:00:00 ETA
written to stdout
pod "busybox" deleted
Get pod's YAML
kubectl get po nginx -o yaml
# or
kubectl get po nginx -oyaml
# or
kubectl get po nginx --output yaml
# or
kubectl get po nginx --output=yaml
Get information about the pod, including details about potential issues (e.g. pod hasn't started)
kubectl describe po nginx
Get pod logs
kubectl logs nginx
If pod crashed and restarted, get logs about the previous instance
$ kubectl logs nginx -p
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
Execute a simple shell on the nginx pod
$ kubectl exec -it nginx -- /bin/sh
Create a busybox pod that echoes 'hello world' and then exits
$ kubectl run busybox --image=busybox -it --restart=Never -- echo 'hello world'
hello world
# or
$ kubectl run busybox --image=busybox -it --restart=Never -- /bin/sh -c 'echo hello world'
hello world
Do the same, but have the pod deleted automatically when it's completed
$ kubectl run busybox --image=busybox -it --rm --restart=Never -- /bin/sh -c 'echo hello world'
hello world
pod "busybox" deleted
Create an nginx pod and set an env value as 'var1=val1'. Check the env value existence within the pod
kubectl run nginx --image=nginx --restart=Never --env=var1=val1
# then
kubectl exec -it nginx -- env
# or
kubectl exec -it nginx -- sh -c 'echo $var1'
# or
kubectl describe po nginx | grep val1
# or
kubectl run nginx --restart=Never --image=nginx --env=var1=val1 -it --rm -- env
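As a quick single-variable check, printenv also works (a sketch, assuming the image ships printenv, which the Debian-based official nginx image does):
kubectl exec nginx -- printenv var1 # prints 'val1' if the variable was injected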
2. Multi-container Pods
Create a Pod with two containers, both with image busybox and command "echo hello; sleep 3600". Connect to the second container and run 'ls'
The easiest way is to create a pod with a single container and save its definition in a YAML file:
$ kubectl run busybox --image=busybox --restart=Never -o yaml --dry-run=client -- /bin/sh -c 'echo hello;sleep 3600' > pod.yaml
vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - echo hello;sleep 3600
    image: busybox
    name: busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
Copy/paste the container-related values, so your final YAML contains the following two containers (make sure the containers have different names):
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - echo hello;sleep 3600
    image: busybox
    imagePullPolicy: IfNotPresent
    name: busybox
    resources: {}
  - args:
    - /bin/sh
    - -c
    - echo hello;sleep 3600
    image: busybox
    name: busybox2
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
then
$ kubectl create -f pod.yaml
pod/busybox created
$ kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
busybox   2/2     Running   0          3s
$ kubectl exec -it busybox -c busybox2 -- /bin/sh
/ # ls
bin dev etc home proc root sys tmp usr var
/ # exit
# or you can do the above with just a one-liner
kubectl exec -it busybox -c busybox2 -- ls
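# similarly, if you only need the second container's output rather than a shell,
# kubectl logs accepts -c to pick the container (it printed 'hello' before sleeping):
kubectl logs busybox -c busybox2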
# you can do some cleanup
kubectl delete po busybox
Create pod with nginx container exposed at port 80. Add a busybox init container which downloads a page using "wget -O /work-dir/index.html http://neverssl.com/online". Make a volume of type emptyDir and mount it in both containers. For the nginx container, mount it on "/usr/share/nginx/html" and for the initcontainer, mount it on "/work-dir". When done, get the IP of the created pod and create a busybox pod and run "wget -O- IP"
$ kubectl run web --image=nginx --restart=Never --port=80 --dry-run=client -o yaml > pod-init.yaml
$ cat pod-init.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: web
  name: web
spec:
  containers:
  - image: nginx
    name: web
    ports:
    - containerPort: 80
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
Copy/paste the container related values, so your final YAML should contain the volume and the initContainer:
Volume:
containers:
- image: nginx
  ...
  volumeMounts:
  - name: vol
    mountPath: /usr/share/nginx/html
volumes:
- name: vol
  emptyDir: {}
initContainer:
...
initContainers:
- args:
  - /bin/sh
  - -c
  - wget -O /work-dir/index.html http://neverssl.com/online
  image: busybox
  name: box
  volumeMounts:
  - name: vol
    mountPath: /work-dir
cat pod-init.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: web
  name: web
spec:
  initContainers:
  - args:
    - /bin/sh
    - -c
    - wget -O /work-dir/index.html http://neverssl.com/online
    image: busybox
    name: box
    volumeMounts:
    - name: vol
      mountPath: /work-dir
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
    resources: {}
    volumeMounts:
    - name: vol
      mountPath: /usr/share/nginx/html
  volumes:
  - name: vol
    emptyDir: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
# Apply pod
$ kubectl apply -f pod-init.yaml
pod/web created
# Get IP
$ kubectl get po -o wide
# Execute wget
$ kubectl run box --image=busybox --restart=Never -ti --rm -- /bin/sh -c "wget -O- 10.244.26.218"
Connecting to 10.244.26.218 (10.244.26.218:80)
writing to stdout
<html>
<head>
<title>NeverSSL - helping you get online</title>
... (the NeverSSL landing page served from the init container's index.html) ...
</html>
- 100% |********************************| 2778 0:00:00 ETA
written to stdout
pod "box" deleted
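Before cleaning up, you can also confirm that the init container ran to completion (a jsonpath sketch; initContainerStatuses is the standard pod status field):
kubectl get po web -o jsonpath='{.status.initContainerStatuses[0].state}{"\n"}' # should show a 'terminated' state with reason 'Completed'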
# you can do some cleanup
$ kubectl delete po web
3. Pod design
Create 3 pods with names nginx1,nginx2,nginx3. All of them should have the label app=v1
$ kubectl run nginx1 --image=nginx --restart=Never --labels=app=v1
pod/nginx1 created
$ kubectl run nginx2 --image=nginx --restart=Never --labels=app=v1
pod/nginx2 created
$ kubectl run nginx3 --image=nginx --restart=Never --labels=app=v1
pod/nginx3 created
Show all labels of the pods
$ kubectl get po --show-labels
NAME     READY   STATUS    RESTARTS   AGE     LABELS
nginx1   1/1     Running   0          3m5s    app=v1
nginx2   1/1     Running   0          2m59s   app=v1
nginx3   1/1     Running   0          2m52s   app=v1
Change the labels of pod 'nginx2' to be app=v2
$ kubectl label po nginx2 app=v2 --overwrite
pod/nginx2 labeled
Get the label 'app' for the pods (show a column with APP labels)
$ kubectl get po -L app
NAME     READY   STATUS    RESTARTS   AGE     APP
nginx1   1/1     Running   0          5m14s   v1
nginx2   1/1     Running   0          5m8s    v2
nginx3   1/1     Running   0          5m1s    v1
$ kubectl get po --label-columns=app
NAME READY STATUS RESTARTS AGE APP
nginx1 1/1 Running 0 6m v1
nginx2 1/1 Running 0 5m54s v2
nginx3 1/1 Running 0 5m47s v1
Get only the 'app=v2' pods
$ kubectl get po -l app=v2
NAME     READY   STATUS    RESTARTS   AGE
nginx2   1/1     Running   0          8m51s
# or
kubectl get po -l 'app in (v2)'
# or
kubectl get po --selector=app=v2
Remove the 'app' label from the pods we created before
$ kubectl label po nginx1 nginx2 nginx3 app-
pod/nginx1 labeled
pod/nginx2 labeled
pod/nginx3 labeled
# or
kubectl label po nginx{1..3} app-
# or
kubectl label po -l app app-
$ kubectl get po --show-labels
NAME     READY   STATUS    RESTARTS   AGE   LABELS
nginx1   1/1     Running   0          11m   <none>
nginx2   1/1     Running   0          11m   <none>
nginx3   1/1     Running   0          11m   <none>
Create a pod that will be deployed to a Node that has the label 'accelerator=nvidia-tesla-p100'
$ kubectl label nodes <node name> accelerator=nvidia-tesla-p100
$ kubectl get nodes --show-labels
We can use the 'nodeSelector' property on the Pod YAML:
apiVersion: v1
kind: Pod
metadata:
  name: cuda-test
spec:
  containers:
  - name: cuda-test
    image: "k8s.gcr.io/cuda-vector-add:v0.1"
  nodeSelector:                    # add this
    accelerator: nvidia-tesla-p100 # the selection label
You can easily find out where in the YAML it should be placed by:
kubectl explain po.spec
OR: Use node affinity (https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/#schedule-a-pod-using-required-node-affinity)
apiVersion: v1
kind: Pod
metadata:
  name: affinity-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: accelerator
            operator: In
            values:
            - nvidia-tesla-p100
  containers: ...
Annotate pods nginx1, nginx2, nginx3 with "description='my description'" value
$ kubectl annotate po nginx1 nginx2 nginx3 description="my description"
pod/nginx1 annotated
pod/nginx2 annotated
pod/nginx3 annotated
# or
$ kubectl annotate po nginx{1..3} description='my description'
Check the annotations for pod nginx1
$ kubectl describe po nginx1 | grep -i 'annotations'
Annotations:  cni.projectcalico.org/podIP: 10.244.26.220/32
$ kubectl get pods -o custom-columns=Name:metadata.name,ANNOTATIONS:metadata.annotations.description
Name     ANNOTATIONS
nginx1   my description
nginx2   my description
nginx3   my description
As an alternative to using | grep, you can use jsonPath, like:
kubectl get po nginx1 -o jsonpath='{.metadata.annotations}{"\n"}'
Remove the annotations for these three pods
$ kubectl annotate po nginx{1..3} description-
pod/nginx1 annotated
pod/nginx2 annotated
pod/nginx3 annotated
Remove these pods to have a clean state in your cluster
$ kubectl delete po nginx{1..3}
pod "nginx1" deleted
pod "nginx2" deleted
pod "nginx3" deleted
Deployments
Create a deployment with image nginx:1.7.8, called nginx, having 2 replicas, defining port 80 as the port that this container exposes (don't create a service for this deployment)
$ kubectl create deployment nginx --image=nginx:1.7.8 --dry-run=client -o yaml > deploy.yaml
$ vim deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.7.8
        name: nginx
        resources: {}
status: {}
----------------------------
# after editing (replicas: 2, containerPort: 80):
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.7.8
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
status: {}
$ kubectl apply -f deploy.yaml
or, do something like:
$ kubectl create deployment nginx --image=nginx:1.7.8 --dry-run=client -o yaml | sed 's/replicas: 1/replicas: 2/g' | sed 's/image: nginx:1.7.8/image: nginx:1.7.8\n        ports:\n        - containerPort: 80/g' | kubectl apply -f -
or,
$ kubectl create deploy nginx --image=nginx:1.7.8 --replicas=2 --port=80
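Whichever variant you use, you can spot-check the result without dumping the full YAML (a jsonpath sketch):
kubectl get deploy nginx -o jsonpath='{.spec.replicas}{"\n"}' # 2
kubectl get deploy nginx -o jsonpath='{.spec.template.spec.containers[0].ports[0].containerPort}{"\n"}' # 80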
View the YAML of this deployment
$ kubectl get deploy nginx -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx"}},"strategy":{},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"nginx:1.7.8","name":"nginx","ports":[{"containerPort":80}],"resources":{}}]}}},"status":{}}
  creationTimestamp: "2020-10-24T15:41:12Z"
  generation: 1
  labels:
    app: nginx
  managedFields:
  # ... (server-side field-management bookkeeping, trimmed here for readability)
  name: nginx
  namespace: default
  resourceVersion: "42281"
  selfLink: /apis/apps/v1/namespaces/default/deployments/nginx
  uid: c6fb72bb-52cf-46a9-91e0-d2cf428cd309
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.7.8
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2020-10-24T15:41:12Z"
    lastUpdateTime: "2020-10-24T15:41:12Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2020-10-24T15:41:12Z"
    lastUpdateTime: "2020-10-24T15:41:12Z"
    message: ReplicaSet "nginx-5b6f47948" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  observedGeneration: 1
  replicas: 2
  unavailableReplicas: 2
  updatedReplicas: 2
View the YAML of the replica set that was created by this deployment
$ kubectl describe deploy nginx
# you'll see the name of the replica set in the Events section and in the 'NewReplicaSet' property
# OR you can find the rs directly by:
$ kubectl get rs -l run=nginx # if you created the deployment with the 'run' command
$ kubectl get rs -l app=nginx # if you created the deployment with the 'create' command
# you could also just do
kubectl get rs
$ kubectl get rs nginx-7bf7478b77 -o yaml
Get the YAML for one of the pods
kubectl get po # get all the pods
# OR you can find the pods directly by:
kubectl get po -l run=nginx # if you created the deployment with the 'run' command
kubectl get po -l app=nginx # if you created the deployment with the 'create' command
kubectl get po nginx-7bf7478b77-gjzp8 -o yaml
Check how the deployment rollout is going
$ kubectl rollout status deploy nginx
Waiting for deployment "nginx" rollout to finish: 0 of 2 updated replicas are available...
Update the nginx image to nginx:1.7.9
$ kubectl set image deploy nginx nginx=nginx:1.7.9
deployment.apps/nginx image updated
# alternatively...
kubectl edit deploy nginx # change the .spec.template.spec.containers[0].image
The syntax of the 'kubectl set image' command is:
kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N [options]
Check the rollout history and confirm that the replicas are OK
$ kubectl rollout history deploy nginx
deployment.apps/nginx
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
$ kubectl get deploy nginx
$ kubectl get rs # check that a new replica set has been created
$ kubectl get po
Undo the latest rollout and verify that new pods have the old image (nginx:1.7.8)
kubectl rollout undo deploy nginx
# wait a bit
kubectl get po # select one 'Running' Pod
kubectl describe po nginx-5ff4457d65-nslcl | grep -i image # should be nginx:1.7.8
Deliberately update the deployment with a wrong image, nginx:1.91
kubectl set image deploy nginx nginx=nginx:1.91
# or
kubectl edit deploy nginx # change the image to nginx:1.91
# vim tip: type (without quotes) '/image' and Enter, to navigate quickly
Verify that something's wrong with the rollout
kubectl rollout status deploy nginx
# or
kubectl get po # you'll see 'ErrImagePull'
Return the deployment to the second revision (number 2) and verify the image is nginx:1.7.9
kubectl rollout undo deploy nginx --to-revision=2
kubectl describe deploy nginx | grep Image:
kubectl rollout status deploy nginx # Everything should be OK
Check the details of the fourth revision (number 4)
kubectl rollout history deploy nginx --revision=4 # You'll also see the wrong image displayed here
Scale the deployment to 5 replicas
kubectl scale deploy nginx --replicas=5
kubectl get po
kubectl describe deploy nginx
Autoscale the deployment, pods between 5 and 10, targeting CPU utilization at 80%
kubectl autoscale deploy nginx --min=5 --max=10 --cpu-percent=80
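Verify the HorizontalPodAutoscaler that this created:
kubectl get hpa nginx # shows TARGETS, MINPODS (5), MAXPODS (10), REPLICAS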
Pause the rollout of the deployment
kubectl rollout pause deploy nginx
Update the image to nginx:1.9.1 and check that there's nothing going on, since we paused the rollout
kubectl set image deploy nginx nginx=nginx:1.9.1
# or
kubectl edit deploy nginx # change the image to nginx:1.9.1
kubectl rollout history deploy nginx # no new revision
Resume the rollout and check that the nginx:1.9.1 image has been applied
kubectl rollout resume deploy nginx
kubectl rollout history deploy nginx
kubectl rollout history deploy nginx --revision=6 # insert the number of your latest revision
Delete the deployment and the horizontal pod autoscaler you created
kubectl delete deploy nginx
kubectl delete hpa nginx
#Or
kubectl delete deploy/nginx hpa/nginx
Jobs
Create a job named pi with image perl that runs the command with arguments "perl -Mbignum=bpi -wle 'print bpi(2000)'"
$ kubectl create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'
Wait till it's done, get the output
kubectl get jobs -w # wait till 'SUCCESSFUL' is 1 (will take some time, perl image might be big)
kubectl get po # get the pod name
kubectl logs pi-**** # get the pi numbers
kubectl delete job pi
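Instead of polling with -w, you can block until the job finishes — a sketch using kubectl wait (run it before deleting the job):
kubectl wait --for=condition=complete job/pi --timeout=300s
kubectl logs job/pi # kubectl resolves the job's pod for you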
Create a job with the image busybox that executes the command 'echo hello;sleep 30;echo world'
kubectl create job busybox --image=busybox -- /bin/sh -c 'echo hello;sleep 30;echo world'
Follow the logs for the pod (you'll wait for 30 seconds)
kubectl get po # find the job pod
kubectl logs busybox-ptx58 -f # follow the logs
See the status of the job, describe it and see the logs
kubectl get jobs
kubectl describe jobs busybox
kubectl logs job/busybox
Delete the job
kubectl delete job busybox
Create a job but ensure that it will be automatically terminated by kubernetes if it takes more than 30 seconds to execute
kubectl create job busybox --image=busybox --dry-run=client -o yaml -- /bin/sh -c 'while true; do echo hello; sleep 10;done' > job.yaml
vi job.yaml
Add job.spec.activeDeadlineSeconds=30
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  activeDeadlineSeconds: 30 # add this line
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: busybox
    spec:
      containers:
      - args:
        - /bin/sh
        - -c
        - while true; do echo hello; sleep 10;done
        image: busybox
        name: busybox
        resources: {}
      restartPolicy: OnFailure
status: {}
Create the same job, make it run 5 times, one after the other. Verify its status and delete it
kubectl create job busybox --image=busybox --dry-run=client -o yaml -- /bin/sh -c 'echo hello;sleep 30;echo world' > job.yaml
vi job.yaml
Add job.spec.completions=5
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  completions: 5 # add this line
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: busybox
    spec:
      containers:
      - args:
        - /bin/sh
        - -c
        - echo hello;sleep 30;echo world
        image: busybox
        name: busybox
        resources: {}
      restartPolicy: OnFailure
status: {}
Create the same job, but make it run 5 parallel times
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  parallelism: 5 # add this line
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: busybox
    spec:
      containers:
      - args:
        - /bin/sh
        - -c
        - echo hello;sleep 30;echo world
        image: busybox
        name: busybox
        resources: {}
      restartPolicy: OnFailure
status: {}
Cron jobs
Create a cron job with image busybox that runs on a schedule of "*/1 * * * *" and writes 'date; echo Hello from the Kubernetes cluster' to standard output
kubectl create cronjob busybox --image=busybox --schedule="*/1 * * * *" -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster'
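To test the command without waiting for the schedule to fire, you can trigger a one-off run from the cron job (a sketch; 'manual-run' is an arbitrary job name):
kubectl create job manual-run --from=cronjob/busybox
kubectl logs job/manual-run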
See its logs and delete it
kubectl get cj
kubectl get jobs --watch
kubectl get po --show-labels # observe that the pods have a label that mentions their 'parent' job
kubectl logs busybox-1529745840-m867r
# Bear in mind that Kubernetes will run a new job/pod for each new cron job
kubectl delete cj busybox
Create a cron job with image busybox that runs every minute and writes 'date; echo Hello from the Kubernetes cluster' to standard output. The cron job should be terminated if it takes more than 17 seconds to start execution after its schedule.
kubectl create cronjob time-limited-job --image=busybox --restart=Never --dry-run=client --schedule="* * * * *" -o yaml -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster' > time-limited-job.yaml
vi time-limited-job.yaml
Add cronjob.spec.startingDeadlineSeconds=17
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  creationTimestamp: null
  name: time-limited-job
spec:
  startingDeadlineSeconds: 17 # add this line
  jobTemplate:
    metadata:
      creationTimestamp: null
      name: time-limited-job
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
          - args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
            image: busybox
            name: time-limited-job
            resources: {}
          restartPolicy: Never
  schedule: '* * * * *'
status: {}
4. Configuration
ConfigMaps
Create a configmap named config with values foo=lala,foo2=lolo
$ kubectl create configmap config --from-literal=foo=lala --from-literal=foo2=lolo
Display its values
$ kubectl get cm config -o yaml
apiVersion: v1
data:
  foo: lala
  foo2: lolo
kind: ConfigMap
metadata:
  creationTimestamp: "2020-10-24T17:03:56Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:foo: {}
        f:foo2: {}
    manager: kubectl
    operation: Update
    time: "2020-10-24T17:03:56Z"
  name: config
  namespace: default
  resourceVersion: "54639"
  selfLink: /api/v1/namespaces/default/configmaps/config
  uid: 86368a2c-2cf1-4011-83e1-891dcd7999a4
# or
kubectl describe cm config
Create and display a configmap from a file
Create the file with
$ echo -e "foo3=lili foo4=lele" > config.txt $ kubectl create cm configmap2 --from-file=config.txt configmap/configmap2 created $ kubectl get cm configmap2 -o yaml apiVersion: v1 data: config.txt: | foo3=lili foo3=lele kind: ConfigMap metadata: creationTimestamp: "2020-10-24T17:08:27Z" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:data: .: {} f:config.txt: {} manager: kubectl operation: Update time: "2020-10-24T17:08:27Z" name: configmap2 namespace: default resourceVersion: "55284" selfLink: /api/v1/namespaces/default/configmaps/configmap2 uid: 75866b28-3f9a-4aba-b04c-6382037fdee5
Create and display a configmap from a .env file
Create the file with the command
echo -e "var1=val1 # this is a comment var2=val2 #anothercomment" > config.env
$ kubectl create cm configmap3 --from-env-file=config.env
configmap/configmap3 created
$ kubectl get cm configmap3 -o yaml
apiVersion: v1
data:
  var1: val1
  var2: val2
kind: ConfigMap
metadata:
  creationTimestamp: "2020-10-24T17:14:38Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:var1: {}
        f:var2: {}
    manager: kubectl
    operation: Update
    time: "2020-10-24T17:14:38Z"
  name: configmap3
  namespace: default
  resourceVersion: "56168"
  selfLink: /api/v1/namespaces/default/configmaps/configmap3
  uid: e24f22d7-c35d-4e00-9f38-9a61040b5616
Create and display a configmap from a file, giving the key 'special'
Create the file with
echo -e "var3=val3 var4=val4" > config4.txt
$ kubectl create cm configmap4 --from-file=special=config4.txt
configmap/configmap4 created
$ kubectl describe cm configmap4
Name:         configmap4
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
special:
----
var3=val3
var4=val4

Events:  <none>
$ kubectl get cm configmap4 -o yaml
apiVersion: v1
data:
  special: |
    var3=val3
    var4=val4
kind: ConfigMap
metadata:
  creationTimestamp: "2020-10-24T17:20:09Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:special: {}
    manager: kubectl
    operation: Update
    time: "2020-10-24T17:20:09Z"
  name: configmap4
  namespace: default
  resourceVersion: "56953"
  selfLink: /api/v1/namespaces/default/configmaps/configmap4
  uid: 526872f9-a313-48ff-a794-50633b0ed011
Create a configMap called 'options' with the value var5=val5. Create a new nginx pod that loads the value from variable 'var5' in an env variable called 'option'
$ kubectl create cm options --from-literal=var5=val5
configmap/options created
$ kubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
$ vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    env:
    - name: option # name of the env variable
      valueFrom:
        configMapKeyRef:
          name: options # name of config map
          key: var5 # name of the entity in config map
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
$ kubectl apply -f pod.yaml
pod/nginx created
$ kubectl exec -it nginx -- env | grep option
option=val5
Create a configMap 'anotherone' with values 'var6=val6', 'var7=val7'. Load this configMap as env variables into a new nginx pod
$ kubectl create configmap anotherone --from-literal=var6=val6 --from-literal=var7=val7
configmap/anotherone created
$ kubectl run --restart=Never nginx --image=nginx -o yaml --dry-run=client > pod.yaml
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
$ vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    envFrom: # different from the previous exercise, which used 'env'
    - configMapRef: # different from the previous 'configMapKeyRef'
        name: anotherone # the name of the config map
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
$ kubectl create -f pod.yaml
$ kubectl exec -it nginx -- env
var6=val6
var7=val7
Create a configMap 'cmvolume' with values 'var8=val8', 'var9=val9'. Load this as a volume inside an nginx pod on path '/etc/lala'. Create the pod and 'ls' into the '/etc/lala' directory.
$ kubectl create configmap cmvolume --from-literal=var8=val8 --from-literal=var9=val9
configmap/cmvolume created
$ kubectl run nginx --image=nginx --restart=Never -o yaml --dry-run=client > pod.yaml
$ vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  volumes: # add a volumes list
  - name: myvolume # just a name, you'll reference this in the pods
    configMap:
      name: cmvolume # name of your configmap
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    volumeMounts: # your volume mounts are listed here
    - name: myvolume # the name that you specified in pod.spec.volumes.name
      mountPath: /etc/lala # the path inside your container
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
$ kubectl create -f pod.yaml
$ kubectl exec -it nginx -- /bin/sh
$ cd /etc/lala
$ ls # will show var8 var9
$ cat var8 # will show val8
SecurityContext
Create the YAML for an nginx pod that runs with the user ID 101. No need to create the pod
$ kubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml
$ vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  securityContext: # insert this line
    runAsUser: 101 # UID for the user
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
Create the YAML for an nginx pod that has the capabilities "NET_ADMIN", "SYS_TIME" added on its single container
$ kubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml
$ vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    securityContext: # insert this line
      capabilities: # and this
        add: ["NET_ADMIN", "SYS_TIME"] # this as well
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
Requests and limits
Create an nginx pod with requests cpu=100m,memory=256Mi and limits cpu=200m,memory=512Mi
kubectl run nginx --image=nginx --restart=Never --requests='cpu=100m,memory=256Mi' --limits='cpu=200m,memory=512Mi'
Secrets
Create a secret called mysecret with the values password=mypass
kubectl create secret generic mysecret --from-literal=password=mypass
Create a secret called mysecret2 that gets key/value from a file
Create a file called username with the value admin:
echo -n admin > username
kubectl create secret generic mysecret2 --from-file=username
Get the value of mysecret2
kubectl get secret mysecret2 -o yaml
echo -n YWRtaW4= | base64 -d # on MAC it is -D; decodes the value and shows 'admin'
Alternative:
kubectl get secret mysecret2 -o jsonpath='{.data.username}{"\n"}' | base64 -d # on MAC it is -D
Create an nginx pod that mounts the secret mysecret2 in a volume on path /etc/foo
$ kubectl run nginx --image=nginx --restart=Never -o yaml --dry-run=client > pod.yaml
$ vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  volumes: # specify the volumes
  - name: foo # this name will be used for reference inside the container
    secret: # we want a secret
      secretName: mysecret2 # name of the secret - this must already exist on pod creation
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    volumeMounts: # our volume mounts
    - name: foo # name on pod.spec.volumes
      mountPath: /etc/foo # our mount path
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
$ kubectl create -f pod.yaml
$ kubectl exec -it nginx -- /bin/bash
$ ls /etc/foo # shows username
$ cat /etc/foo/username # shows admin
Delete the pod you just created and mount the variable 'username' from secret mysecret2 onto a new nginx pod in env variable called 'USERNAME'
$ kubectl delete po nginx
$ kubectl run nginx --image=nginx --restart=Never -o yaml --dry-run=client > pod.yaml
$ vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    env: # our env variables
    - name: USERNAME # asked name
      valueFrom:
        secretKeyRef: # secret reference
          name: mysecret2 # our secret's name
          key: username # the key of the data in the secret
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
$ kubectl create -f pod.yaml
$ kubectl exec -it nginx -- env | grep USERNAME | cut -d '=' -f 2 # will show 'admin'
ServiceAccounts
See all the service accounts of the cluster in all namespaces
$ kubectl get sa --all-namespaces
# or
$ kubectl get sa -A
Create a new serviceaccount called 'myuser'
$ kubectl create sa myuser
serviceaccount/myuser created
# Alternatively:
$ kubectl get sa default -o yaml > sa.yaml
$ vim sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myuser
$ kubectl create -f sa.yaml
Create an nginx pod that uses 'myuser' as a service account
$ kubectl run nginx --image=nginx --restart=Never --serviceaccount=myuser -o yaml --dry-run=client > pod.yaml
$ kubectl apply -f pod.yaml
or you can add it manually:
$ kubectl run nginx --image=nginx --restart=Never -o yaml --dry-run=client > pod.yaml
$ vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  serviceAccountName: myuser # we use pod.spec.serviceAccountName
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
$ kubectl create -f pod.yaml
$ kubectl describe pod nginx # you will see that a new secret called myuser-token-***** has been mounted
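To double-check which service account the pod actually uses and that its token got mounted (the mount path below is the Kubernetes default):
kubectl get po nginx -o jsonpath='{.spec.serviceAccountName}{"\n"}' # myuser
kubectl exec nginx -- ls /var/run/secrets/kubernetes.io/serviceaccount # ca.crt namespace token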
5. Observability
Liveness and readiness probes
Create an nginx pod with a liveness probe that just runs the command 'ls'. Save its YAML in pod.yaml. Run it, check its probe status, delete it.
$ kubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml
$ vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    livenessProbe: # our probe
      exec: # add this line
        command: # command definition
        - ls # ls command
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
$ kubectl create -f pod.yaml
$ kubectl describe pod nginx | grep -i liveness
Liveness: exec [ls] delay=0s timeout=1s period=10s #success=1 #failure=3
$ kubectl delete -f pod.yaml
Modify the pod.yaml file so that liveness probe starts kicking in after 5 seconds whereas the interval between probes would be 5 seconds. Run it, check the probe, delete it.
$ kubectl explain pod.spec.containers.livenessProbe # get the exact names
KIND:     Pod
VERSION:  v1

RESOURCE: livenessProbe <Object>

DESCRIPTION:
     Periodic probe of container liveness. Container will be restarted if the
     probe fails. Cannot be updated. More info:
     https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

     Probe describes a health check to be performed against a container to
     determine whether it is alive or ready to receive traffic.

FIELDS:
   exec <Object>
     One and only one of the following should be specified. Exec specifies the
     action to take.

   failureThreshold <integer>
     Minimum consecutive failures for the probe to be considered failed after
     having succeeded. Defaults to 3. Minimum value is 1.

   httpGet <Object>
     HTTPGet specifies the http request to perform.

   initialDelaySeconds <integer>
     Number of seconds after the container has started before liveness probes
     are initiated. More info:
     https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

   periodSeconds <integer>
     How often (in seconds) to perform the probe. Default to 10 seconds.
     Minimum value is 1.

   successThreshold <integer>
     Minimum consecutive successes for the probe to be considered successful
     after having failed. Defaults to 1. Must be 1 for liveness and startup.
     Minimum value is 1.

   tcpSocket <Object>
     TCPSocket specifies an action involving a TCP port. TCP hooks not yet
     supported

   timeoutSeconds <integer>
     Number of seconds after which the probe times out. Defaults to 1 second.
     Minimum value is 1. More info:
     https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    livenessProbe:
      initialDelaySeconds: 5 # add this line
      periodSeconds: 5 # add this line as well
      exec:
        command:
        - ls
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
then
$ kubectl create -f pod.yaml
$ kubectl describe po nginx | grep -i liveness
Liveness: exec [ls] delay=5s timeout=1s period=5s #success=1 #failure=3
$ kubectl delete -f pod.yaml
Create an nginx pod (that includes port 80) with an HTTP readinessProbe on path '/' on port 80. Again, run it, check the readinessProbe, delete it.
$ kubectl run nginx --image=nginx --dry-run=client -o yaml --restart=Never --port=80 > pod.yaml
$ vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    ports:
    - containerPort: 80 # Note: readiness probes run on the container during its whole lifecycle. Since nginx exposes 80, containerPort: 80 is not required for the probe to work.
    readinessProbe: # declare the readiness probe
      httpGet: # add this line
        path: / #
        port: 80 #
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
$ kubectl create -f pod.yaml
$ kubectl describe pod nginx | grep -i readiness # to see the pod readiness details
$ kubectl delete -f pod.yaml
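You can also read the computed Ready condition directly while the pod is still running (a jsonpath sketch):
kubectl get po nginx -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}' # 'True' once the probe has passed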
Logging
Create a busybox pod that runs 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'. Check its logs
$ kubectl run busybox --image=busybox --restart=Never -- /bin/sh -c 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'
pod/busybox created
$ kubectl logs busybox -f
0: Sun Oct 25 10:34:24 UTC 2020
1: Sun Oct 25 10:34:25 UTC 2020
2: Sun Oct 25 10:34:26 UTC 2020
3: Sun Oct 25 10:34:27 UTC 2020
4: Sun Oct 25 10:34:28 UTC 2020
5: Sun Oct 25 10:34:29 UTC 2020
6: Sun Oct 25 10:34:30 UTC 2020
7: Sun Oct 25 10:34:31 UTC 2020
8: Sun Oct 25 10:34:32 UTC 2020
9: Sun Oct 25 10:34:33 UTC 2020
10: Sun Oct 25 10:34:34 UTC 2020
Debugging
Create a busybox pod that runs 'ls /notexist'. Determine if there's an error (of course there is), see it. In the end, delete the pod
$ kubectl run busybox --restart=Never --image=busybox -- /bin/sh -c 'ls /notexist'
pod/busybox created
$ kubectl logs busybox
ls: /notexist: No such file or directory
$ kubectl describe po busybox
$ kubectl delete po busybox
Create a busybox pod that runs 'notexist'. Determine if there's an error (of course there is), see it. In the end, delete the pod forcefully with a 0 grace period
kubectl run busybox --restart=Never --image=busybox -- notexist
kubectl logs busybox # will bring nothing! container never started
kubectl describe po busybox # in the events section, you'll see the error
# also...
kubectl get events | grep -i error # you'll see the error here as well
kubectl delete po busybox --force --grace-period=0
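Events can also be narrowed to this pod with a field selector instead of grep (a sketch; run it before the force-delete):
kubectl get events --field-selector involvedObject.name=busybox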
Get CPU/memory utilization for nodes (metrics-server must be running)
$ kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
NodeIP   146m         3%     2676Mi          18%
6. Services and Networking
Create a pod with image nginx called nginx and expose its port 80
$ kubectl run nginx --image=nginx --restart=Never --port=80 --expose
service/nginx created
pod/nginx created
Confirm that ClusterIP has been created. Also check endpoints
$ kubectl get svc nginx
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.244.104.17   <none>        80/TCP    69s
$ kubectl get ep
NAME    ENDPOINTS        AGE
nginx   10.244.3.22:80   2m4s
Get service's ClusterIP, create a temp busybox pod and 'hit' that IP with wget
$ kubectl get svc nginx
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.244.104.17   <none>        80/TCP    3m18s
$ kubectl run busybox --rm --image=busybox -it --restart=Never -- sh
If you don't see a command prompt, try pressing enter.
/ # wget -O- 10.244.104.17:80
Connecting to 10.244.104.17:80 (10.244.104.17:80)
writing to stdout
<!DOCTYPE html>
... (the same nginx welcome page shown earlier) ...
</html>
- 100% |********************************| 612 0:00:00 ETA
written to stdout
/ # exit
pod "busybox" deleted
or
IP=$(kubectl get svc nginx --template={{.spec.clusterIP}}) # get the IP (something like 10.108.93.130)
kubectl run busybox --rm --image=busybox -it --restart=Never --env="IP=$IP" -- wget -O- $IP:80 --timeout 2
# Tip: --timeout is optional, but it helps to get an answer more quickly when the connection fails (seconds vs minutes)
Convert the ClusterIP to NodePort for the same service and find the NodePort port. Hit service using Node's IP. Delete the service and the pod at the end.
$ kubectl edit svc nginx
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-06-25T07:55:16Z
  name: nginx
  namespace: default
  resourceVersion: "93442"
  selfLink: /api/v1/namespaces/default/services/nginx
  uid: 191e3dac-784d-11e8-86b1-00155d9f663c
spec:
  clusterIP: 10.97.242.220
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  sessionAffinity: None
  type: NodePort # change ClusterIP to NodePort
status:
  loadBalancer: {}
$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.244.64.1     <none>        443/TCP        147m
nginx        NodePort    10.244.104.17   <none>        80:30347/TCP   11m
wget -O- NODE_IP:30347 # use the NodePort shown above; if you're using Kubernetes with Docker for Windows/Mac, try 127.0.0.1
# if you're using minikube, run 'minikube ip' to get the node IP, e.g. 192.168.99.117
$ kubectl delete svc nginx # Deletes the service
$ kubectl delete pod nginx # Deletes the pod
Create a deployment called foo using image 'dgkanatsios/simpleapp' (a simple server that returns hostname) and 3 replicas. Label it as 'app=foo'. Declare that containers in this pod will accept traffic on port 8080 (do NOT create a service yet)
$ kubectl create deploy foo --image=dgkanatsios/simpleapp
deployment.apps/foo created
$ kubectl expose deploy foo --port=8080
service/foo exposed
$ kubectl scale deploy foo --replicas=3
deployment.apps/foo scaled
$ kubectl get deploy foo -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-10-25T12:44:42Z"
  generation: 2
  labels:
    app: foo
  managedFields:
  # ... (server-side field-management bookkeeping, trimmed here for readability)
  name: foo
  namespace: default
  resourceVersion: "24268"
  selfLink: /apis/apps/v1/namespaces/default/deployments/foo
  uid: 28b4e675-a49e-4e64-9d1f-1d382c2908e9
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: foo
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: foo
    spec:
      containers:
      - image: dgkanatsios/simpleapp
        imagePullPolicy: Always
        name: simpleapp
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: "2020-10-25T12:44:42Z"
    lastUpdateTime: "2020-10-25T12:44:48Z"
    message: ReplicaSet "foo-6bd885fffd" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2020-10-25T12:47:13Z"
    lastUpdateTime: "2020-10-25T12:47:13Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 2
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3
$ kubectl get svc foo
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
foo ClusterIP 10.244.101.68 <none> 8080/TCP 5m23s
Get the pod IPs. Create a temp busybox pod and try hitting them on port 8080
$ kubectl get pods -l app=foo -o wide
NAME                   READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
foo-6bd885fffd-f95m4   1/1     Running   0          5m45s   10.244.3.26   NodeIP   <none>           <none>
foo-6bd885fffd-frr6f   1/1     Running   0          8m10s   10.244.3.24   NodeIP   <none>           <none>
foo-6bd885fffd-wwrml   1/1     Running   0          5m45s   10.244.3.25   NodeIP   <none>           <none>
$ kubectl run busybox --image=busybox --restart=Never -it --rm -- sh
If you don't see a command prompt, try pressing enter.
/ # wget -O- 10.244.3.26:8080
Connecting to 10.244.3.26:8080 (10.244.3.26:8080)
writing to stdout
Hello world from foo-6bd885fffd-f95m4 and version 2.0
- 100% |********************************| 54 0:00:00 ETA
written to stdout
/ # exit
pod "busybox" deleted
Create a service that exposes the deployment on port 6262. Verify its existence, check the endpoints
$ kubectl expose deploy foo --port=6262 --target-port=8080
service/foo exposed
$ kubectl get svc foo
NAME   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
foo    ClusterIP   10.244.97.118   <none>        6262/TCP   10s
$ kubectl get endpoints foo
NAME   ENDPOINTS                                             AGE
foo    10.244.3.24:8080,10.244.3.25:8080,10.244.3.26:8080    24s
Create a temp busybox pod and connect via wget to foo service. Verify that each time there's a different hostname returned. Delete deployment and services to cleanup the cluster
kubectl get svc # get the foo service ClusterIP
kubectl run busybox --image=busybox -it --rm --restart=Never -- sh
wget -O- foo:6262 # DNS works! run it many times, you'll see different pods responding
wget -O- SERVICE_CLUSTER_IP:6262 # ClusterIP works as well
# you can also kubectl logs on deployment pods to see the container logs
kubectl delete svc foo
kubectl delete deploy foo
The actual session looks like this:
$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
foo          ClusterIP   10.244.97.118   <none>        6262/TCP   3m10s
kubernetes   ClusterIP   10.244.64.1     <none>        443/TCP    175m
$ kubectl run busybox --image=busybox -it --rm --restart=Never -- sh
If you don't see a command prompt, try pressing enter.
/ # wget -O- foo:6262
Connecting to foo:6262 (10.244.97.118:6262)
writing to stdout
Hello world from foo-6bd885fffd-frr6f and version 2.0
- 100% |********************************| 54 0:00:00 ETA
written to stdout
/ # wget -O- foo:6262
Connecting to foo:6262 (10.244.97.118:6262)
writing to stdout
Hello world from foo-6bd885fffd-f95m4 and version 2.0
- 100% |********************************| 54 0:00:00 ETA
written to stdout
/ # wget -O- foo:6262
Connecting to foo:6262 (10.244.97.118:6262)
writing to stdout
Hello world from foo-6bd885fffd-wwrml and version 2.0
- 100% |********************************| 54 0:00:00 ETA
written to stdout
/ # wget -O- 10.244.97.118:6262
Connecting to 10.244.97.118:6262 (10.244.97.118:6262)
writing to stdout
Hello world from foo-6bd885fffd-wwrml and version 2.0
- 100% |********************************| 54 0:00:00 ETA
written to stdout
/ # exit
pod "busybox" deleted
$ kubectl delete svc foo
$ kubectl delete deploy foo
Create an nginx deployment of 2 replicas, expose it via a ClusterIP service on port 80. Create a NetworkPolicy so that only pods with labels 'access: granted' can access the deployment and apply it
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
$ kubectl scale deploy nginx --replicas=2
deployment.apps/nginx scaled
$ kubectl expose deploy nginx --port=80
service/nginx exposed
$ kubectl describe svc nginx
Name:              nginx
Namespace:         default
Labels:            app=nginx
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP:                10.244.84.228
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.3.29:80,10.244.3.30:80
Session Affinity:  None
Events:            <none>
$ vi policy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx  # pick a name
spec:
  podSelector:
    matchLabels:
      app: nginx      # selector for the pods
  ingress:            # allow ingress traffic
  - from:
    - podSelector:    # from pods
        matchLabels:  # with this label
          access: granted
# Create the NetworkPolicy
$ kubectl create -f policy.yaml
networkpolicy.networking.k8s.io/access-nginx created
$ kubectl get networkpolicy
NAME           POD-SELECTOR   AGE
access-nginx   app=nginx      11s
# This should not work. --timeout is optional here, but it helps to get the answer more quickly (seconds vs. minutes)
$ kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- http://nginx:80 --timeout 2
If you don't see a command prompt, try pressing enter.
wget: download timed out
pod "busybox" deleted
pod default/busybox terminated (Error)
# This should be fine
$ kubectl run busybox --image=busybox --rm -it --restart=Never --labels=access=granted -- wget -O- http://nginx:80 --timeout 2
Connecting to nginx:80 (10.244.84.228:80)
writing to stdout
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
-                    100% |********************************|   612  0:00:00 ETA
written to stdout
pod "busybox" deleted
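A related pattern worth knowing: an allow policy like the one above is often paired with a default-deny ingress policy for the namespace. A minimal sketch (the name is illustrative; enforcement also requires a CNI plugin that supports NetworkPolicy, such as Calico):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-ingress  # illustrative name
spec:
  podSelector: {}             # empty selector = every pod in the namespace
  policyTypes:
  - Ingress                   # no ingress rules listed, so all ingress is denied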
七. State Persistence
Define volumes
Create a busybox pod with two containers, both using the busybox image and running the command 'sleep 3600'. Make both containers mount an emptyDir at '/etc/foo'. Connect to the second busybox container and write the first column of the '/etc/passwd' file to '/etc/foo/passwd'. Connect to the first busybox container and write the '/etc/foo/passwd' file to standard output. Delete the pod.
The easiest way to do this is to create a template pod with:
$ kubectl run busybox --image=busybox --restart=Never -o yaml --dry-run=client -- /bin/sh -c 'sleep 3600' > pod.yaml
$ vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  containers:
  - args:
    - /bin/sh
    - -c
    - sleep 3600
    image: busybox
    imagePullPolicy: IfNotPresent
    name: busybox
    resources: {}
    volumeMounts: #
    - name: myvolume #
      mountPath: /etc/foo #
  - args:
    - /bin/sh
    - -c
    - sleep 3600
    image: busybox
    name: busybox2 # don't forget to change the name during copy/paste; it must differ from the first container's name!
    volumeMounts: #
    - name: myvolume #
      mountPath: /etc/foo #
  volumes: #
  - name: myvolume #
    emptyDir: {} #
$ kubectl create -f pod.yaml
pod/busybox created
$ kubectl exec -it busybox -c busybox2 -- /bin/sh
/ # cat /etc/passwd | cut -f 1 -d ':' > /etc/foo/passwd
/ # cat /etc/foo/passwd
root
daemon
bin
sys
sync
mail
www-data
operator
nobody
/ # exit
$ kubectl exec -it busybox -c busybox -- /bin/sh
/ # mount | grep foo
/dev/vda1 on /etc/foo type ext4 (rw,relatime,data=ordered)
/ # cat /etc/foo/passwd
root
daemon
bin
sys
sync
mail
www-data
operator
nobody
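The same verification can be done non-interactively; a sketch equivalent to the sessions above:

# write from the second container, read from the first
kubectl exec busybox -c busybox2 -- sh -c 'cut -d: -f1 /etc/passwd > /etc/foo/passwd'
kubectl exec busybox -c busybox -- cat /etc/foo/passwd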
Create a PersistentVolume of 10Gi, called 'myvolume'. Make it have accessModes of 'ReadWriteOnce' and 'ReadWriteMany', storageClassName 'normal', mounted on hostPath '/etc/foo'. Save it as pv.yaml and add it to the cluster. Show the PersistentVolumes that exist on the cluster
$ vi pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: myvolume
spec:
  storageClassName: normal
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  hostPath:
    path: /etc/foo
$ kubectl apply -f pv.yaml
persistentvolume/myvolume created
$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
myvolume   10Gi       RWO,RWX        Retain           Available           normal                  15s
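If you forget the exact field names, kubectl explain prints the schema for any resource path, for example:

kubectl explain pv.spec.accessModes
kubectl explain pv.spec.hostPath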
Create a PersistentVolumeClaim for this storage class, called mypvc, with a request of 4Gi, an accessMode of ReadWriteOnce, and the storageClassName 'normal'; save it as pvc.yaml. Create it on the cluster. Show the PersistentVolumeClaims of the cluster. Show the PersistentVolumes of the cluster
$ vi pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc
spec:
  storageClassName: normal
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
$ kubectl create -f pvc.yaml
persistentvolumeclaim/mypvc created
$ kubectl get pvc
NAME    STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    myvolume   10Gi       RWO,RWX        normal         9s
$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM           STORAGECLASS   REASON   AGE
myvolume   10Gi       RWO,RWX        Retain           Bound    default/mypvc   normal                  7m4s
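Note that the claim requested 4Gi but shows a capacity of 10Gi: a PVC binds to a whole PersistentVolume that satisfies its storageClassName, access mode, and size, and here 'myvolume' is the only candidate. You can confirm the binding with, for example:

kubectl describe pvc mypvc   # the Volume field should show 'myvolume'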
Create a busybox pod with command 'sleep 3600' and save it as pod.yaml. Mount the PersistentVolumeClaim at '/etc/foo'. Connect to the 'busybox' pod and copy the '/etc/passwd' file to '/etc/foo/passwd'
$ kubectl run busybox --image=busybox --restart=Never -o yaml --dry-run=client -- /bin/sh -c 'sleep 3600' > pod.yaml
$ vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - sleep 3600
    image: busybox
    imagePullPolicy: IfNotPresent
    name: busybox
    resources: {}
    volumeMounts: #
    - name: myvolume #
      mountPath: /etc/foo #
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  volumes: #
  - name: myvolume #
    persistentVolumeClaim: #
      claimName: mypvc #
status: {}
$ kubectl create -f pod.yaml
pod/busybox created
$ kubectl exec busybox -it -- cp /etc/passwd /etc/foo/passwd
$ kubectl exec busybox -- cat /etc/foo/passwd  # the file lives inside the pod, so read it via exec
root:x:0:0:root:/root:/bin/sh
daemon:x:1:1:daemon:/usr/sbin:/bin/false
bin:x:2:2:bin:/bin:/bin/false
sys:x:3:3:sys:/dev:/bin/false
sync:x:4:100:sync:/bin:/bin/sync
mail:x:8:8:mail:/var/spool/mail:/bin/false
www-data:x:33:33:www-data:/var/www:/bin/false
operator:x:37:37:Operator:/var:/bin/false
nobody:x:65534:65534:nobody:/home:/bin/false
Create a second pod which is identical to the one you just created (you can easily do it by changing the 'name' property in pod.yaml). Connect to it and verify that '/etc/foo' contains the 'passwd' file. Delete the pods to clean up. Note: if you can't see the file from the second pod, can you figure out why? What would you do to fix that?
Create the second pod, called busybox2:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox2  # change 'metadata.name: busybox' to 'metadata.name: busybox2'
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - sleep 3600
    image: busybox
    imagePullPolicy: IfNotPresent
    name: busybox
    resources: {}
    volumeMounts:
    - name: myvolume
      mountPath: /etc/foo
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  volumes:
  - name: myvolume
    persistentVolumeClaim:
      claimName: mypvc
status: {}
Then:
$ kubectl create -f pod.yaml
pod/busybox2 created
$ kubectl exec busybox2 -- ls /etc/foo
passwd
$ kubectl exec busybox2 -- cat /etc/foo/passwd
root:x:0:0:root:/root:/bin/sh
daemon:x:1:1:daemon:/usr/sbin:/bin/false
bin:x:2:2:bin:/bin:/bin/false
sys:x:3:3:sys:/dev:/bin/false
sync:x:4:100:sync:/bin:/bin/sync
mail:x:8:8:mail:/var/spool/mail:/bin/false
www-data:x:33:33:www-data:/var/www:/bin/false
operator:x:37:37:Operator:/var:/bin/false
nobody:x:65534:65534:nobody:/home:/bin/false
If the file shows on the first pod but not on the second, the second pod has most likely been scheduled on a different node.
# check which nodes the pods are on
kubectl get po busybox -o wide
kubectl get po busybox2 -o wide
If they are on different nodes, you won't see the file, because we used the hostPath volume type, which lives on one specific node's filesystem. If you need to access the same files in a multi-node cluster, you need a volume type that is independent of any specific node. There are many such types, varying by cloud provider (see https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes); a general solution is to use NFS.
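For example, a minimal sketch of an NFS-backed PersistentVolume; the server address and export path below are placeholders for illustration, not values from this cluster:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: myvolume-nfs         # illustrative name
spec:
  storageClassName: normal
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany            # NFS supports concurrent access from many nodes
  nfs:
    server: 10.0.0.10        # placeholder: your NFS server's address
    path: /exports/foo       # placeholder: the exported directory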
Create a busybox pod with 'sleep 3600' as arguments. Copy '/etc/passwd' from the pod to your local folder
$ kubectl run busybox --image=busybox --restart=Never -- sleep 3600
$ kubectl cp busybox:etc/passwd ./passwd
$ cat passwd
root:x:0:0:root:/root:/bin/sh
daemon:x:1:1:daemon:/usr/sbin:/bin/false
bin:x:2:2:bin:/bin:/bin/false
sys:x:3:3:sys:/dev:/bin/false
sync:x:4:100:sync:/bin:/bin/sync
mail:x:8:8:mail:/var/spool/mail:/bin/false
www-data:x:33:33:www-data:/var/www:/bin/false
operator:x:37:37:Operator:/var:/bin/false
nobody:x:65534:65534:nobody:/home:/bin/false
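Note that kubectl cp relies on a tar binary inside the container (busybox ships one). If an image lacks tar, an exec-based alternative achieves the same result:

kubectl exec busybox -- cat /etc/passwd > ./passwd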