centos-master:172.16.100.60
centos-minion:172.16.100.62
k8s, etcd, docker and so on were all installed with yum. For the deployment I followed the Kubernetes definitive guide book and a video (the video is in my Baidu Netdisk). I've forgotten the exact steps; the installation isn't hard, the tricky part is doing it for the first time, and I can't remember which files I changed, so I'll write up the installation steps separately next time.
First, install Heapster. I installed version 1.2.0.
Personally I feel only a few YAML files are needed; I have no idea what the rest of the package is for, since I never used it.
[root@centos-master influxdb]# pwd
/usr/src/heapster-1.2.0/deploy/kube-config/influxdb
[root@centos-master influxdb]# ls
grafana-deploment.yaml  heapster-deployment.yaml  influxdb-deployment.yaml
grafana-service.yaml    heapster-service.yaml     influxdb-service.yaml
[root@centos-master influxdb]# cat heapster-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 2
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      containers:
      - name: heapster
        image: docker.io/ist0ne/heapster-amd64:latest
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:http://172.16.100.60:8080?inClusterConfig=false
        - --sink=influxdb:http://10.254.129.95:8086
About --source and --sink:
The freshly downloaded file has the two lines below; you can look up exactly what they mean. In my version, 100.60 is the cluster master. What is 129.95? I forget; I think it was one of the addresses from kubectl get svc, but since I deleted that svc at some point I can't find the original IP anymore.
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb:8086
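If you lose track of that address, the service list is the place to recover it; presumably 10.254.129.95 was the ClusterIP of the monitoring-influxdb service. A quick check (assuming it was deployed into kube-system, as in the YAML above):

# note the CLUSTER-IP column; --sink should point at the InfluxDB service IP (or DNS name) on port 8086
kubectl get svc --namespace=kube-system
kubectl get svc monitoring-influxdb --namespace=kube-system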
[root@centos-master influxdb]# cat heapster-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
[root@centos-master influxdb]# cat grafana-deploment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: docker.io/ist0ne/heapster-grafana-amd64:latest
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GRAFANA_PORT
          value: "3000"
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
      volumes:
      - name: grafana-storage
        emptyDir: {}
[root@centos-master influxdb]# cat grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
  selector:
    name: influxGrafana
[root@centos-master influxdb]# cat influxdb-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: docker.io/ist0ne/heapster-influxdb-amd64:v1.1.1
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
[root@centos-master influxdb]# cat influxdb-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels: null
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 8083
    targetPort: 8083
  - name: api
    port: 8086
    targetPort: 8086
  selector:
    name: influxGrafana
Those are all the YAML files Heapster needs.
kubectl create -f ../influxdb
This generates the corresponding pods and svc.
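To confirm they actually came up, a quick check like the following should do (not from the original notes, just the obvious verification):

# the Heapster, Grafana and InfluxDB pods should all reach Running
kubectl get pods --namespace=kube-system
# heapster, monitoring-grafana and monitoring-influxdb should appear in the service list
kubectl get svc --namespace=kube-system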
Then it's time to create the actual pod application.
[root@centos-master yaml]# cat php-apache-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: php-apache
spec:
  replicas: 1
  template:
    metadata:
      name: php-apache
      labels:
        app: php-apache
    spec:
      containers:
      - name: php-apache
        image: siriuszg/hpa-example
        resources:
          requests:
            cpu: 200m
        ports:
        - containerPort: 80
[root@centos-master yaml]# cat php-apache-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: php-apache
spec:
  ports:
  - port: 80
  selector:
    app: php-apache
[root@centos-master yaml]# cat busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    name: busybox
[root@centos-master yaml]# cat hpa-php-apache.yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 10
Note: in a multi-node cluster, php-apache and busybox may not land on the same node, and then the access below will fail. Either pin them onto the same node or install flannel so the nodes can reach each other's pods; you will have to do that step sooner or later anyway.
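If for now you just want to pin the pods instead of setting up flannel, a nodeSelector is the simplest way. A minimal sketch, assuming the standard kubernetes.io/hostname node label; the hostname value is just this cluster's minion:

# hypothetical addition: pin both pods to centos-minion
# (goes under spec.template.spec in php-apache-rc.yaml and under spec in busybox.yaml)
nodeSelector:
  kubernetes.io/hostname: centos-minion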
Run kubectl create -f on each of the files above.
Check whether Heapster is working:
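Spelled out, that is roughly (file names taken from the listings above):

kubectl create -f php-apache-rc.yaml
kubectl create -f php-apache-svc.yaml
kubectl create -f busybox.yaml
kubectl create -f hpa-php-apache.yaml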
[root@centos-master yaml]# kubectl top node
NAME            CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
centos-minion   105m         2%        1368Mi          34%
If you see output like the above, it worked. I don't know why the 127.0.0.1 node doesn't show up here in my setup.
If nothing shows up, check the logs carefully: check whether the node is up and has joined the cluster, look at /var/log/messages, run kubectl describe hpa php-apache, or check the php-apache pod's logs.
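In concrete commands, those checks are roughly the following (pod names will differ on your cluster):

# is the node registered and Ready?
kubectl get nodes
# kubelet / kube-proxy errors end up in the system log on CentOS
tail -f /var/log/messages
# why is the HPA not getting metrics?
kubectl describe hpa php-apache
# logs of the application pod and of the heapster pod itself
kubectl get pods --namespace=kube-system
kubectl logs <heapster-pod-name> --namespace=kube-system
kubectl logs <php-apache-pod-name>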
One more thing: the examples online all put these resources in the kube-system namespace, but with it I could never get metrics. Later I removed it and used the default namespace, which is what the configs above show, and then the metrics were detected. I don't know why.
Check the HPA:
[root@centos-master yaml]# kubectl get hpa
NAME         REFERENCE                          TARGET    CURRENT   MINPODS   MAXPODS   AGE
php-apache   ReplicationController/php-apache   10%       0%        1         10        23h
[root@centos-master yaml]# kubectl get hpa --namespace=kube-system
NAME         REFERENCE                          TARGET    CURRENT     MINPODS   MAXPODS   AGE
php-apache   ReplicationController/php-apache   50%       <waiting>   1         10        20h
You can see that the HPA in the default namespace does get a CURRENT value, while the one I created earlier in kube-system just stays in the <waiting> state.
Exec into busybox and run a load test:
[root@centos-master ~]# kubectl exec -ti busybox -- sh
/ # while true; do wget -q -O- http://10.254.221.176 > /dev/null ; done
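The address being hammered here should be the ClusterIP of the php-apache Service; it will be different on your cluster, and you can look it up with:

# CLUSTER-IP of the php-apache Service (if kube-dns is running, the name php-apache works too)
kubectl get svc php-apache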
After ten-odd seconds the number of pods increases and the CPU CURRENT value rises as well. But there's a problem: in theory it should also scale back down automatically, yet it only scales up; scale-down never happens. When I stop the load test there are still just as many pods.
The pod count is not reduced as CPU drops. Strange.
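If you want to watch what the HPA actually does after the load stops, the commands below help. Note that the controller deliberately waits several minutes before scaling down, so "still this many pods" right after stopping the loop is expected; how long it waits is a kube-controller-manager setting and varies by version.

# watch the CURRENT value and replica count over time
kubectl get hpa php-apache -w
# the events here usually say why a scale-down did or did not happen
kubectl describe hpa php-apache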
[root@centos-master yaml]# kubectl get pods -o wide | grep php-apache
php-apache-5bcgk   1/1       Running   0          44s       10.0.34.2    127.0.0.1
php-apache-b4nv5   1/1       Running   0          44s       10.0.16.4    centos-minion
php-apache-kw1m0   1/1       Running   0          44s       10.0.34.17   127.0.0.1
php-apache-vz2rx   1/1       Running   0          3h        10.0.16.3    centos-minion
[root@centos-master yaml]# kubectl get hpa
NAME         REFERENCE                          TARGET    CURRENT   MINPODS   MAXPODS   AGE
php-apache   ReplicationController/php-apache   10%       25%       1         10        23h
[root@centos-master yaml]#