Summary:
1. kube-scheduler runs as a component on the master nodes. Its main job is to take the unscheduled pods it retrieves from kube-apiserver, run them through a series of scheduling algorithms to find the most suitable node, and finally complete scheduling by writing a Binding object (which records the pod name and the chosen node name) back to kube-apiserver.
2. Like kube-controller-manager, kube-scheduler uses leader election when deployed for high availability. After startup, the instances compete in an election to produce one leader node; the remaining instances block in standby. If the leader becomes unavailable, the remaining instances hold a new election to produce a new leader, which keeps the service available.
In short: kube-scheduler schedules Pods onto the cluster's nodes. It watches kube-apiserver for Pods that have not yet been assigned a node, and assigns a node to each of them according to the scheduling policy.
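To make the Binding mechanism above concrete, here is a minimal sketch of what the scheduler effectively does, performed by hand with kubectl; the pod name nginx-demo and node name k8s-node01 are hypothetical placeholders, not objects from this cluster:

[root@k8s-master01 ~]# cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Binding
metadata:
  name: nginx-demo        # name of a pending (unscheduled) pod
target:
  apiVersion: v1
  kind: Node
  name: k8s-node01        # node the pod should be bound to
EOF

Creating this object is the same API write kube-scheduler uses to commit a scheduling decision.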
1) Create the kube-scheduler certificate signing request
kube-scheduler uses this certificate to connect to the apiserver; its own secure port 10259 also serves the same certificate.
[root@k8s-master01 ~]# vim /opt/k8s/certs/kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "localhost",
    "10.10.0.18",
    "10.10.0.19",
    "10.10.0.20"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
2) Generate the kube-scheduler certificate and private key
[root@k8s-master01 ~]# cd /opt/k8s/certs/
[root@k8s-master01 certs]# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/opt/k8s/certs/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
2019/04/24 16:08:38 [INFO] generate received request
2019/04/24 16:08:38 [INFO] received CSR
2019/04/24 16:08:38 [INFO] generating key: rsa-2048
2019/04/24 16:08:38 [INFO] encoded CSR
2019/04/24 16:08:38 [INFO] signed certificate with serial number 288219277582790216633679349308422764913188390208
2019/04/24 16:08:38 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
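As an optional sanity check (not part of the original transcript), you can confirm the certificate subject carries the identity RBAC expects, since the apiserver maps CN to the user name and O to the group:

[root@k8s-master01 certs]# openssl x509 -in kube-scheduler.pem -noout -subject
## expect CN=system:kube-scheduler and O=system:kube-scheduler in the output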
3) Inspect the generated certificate files
[root@k8s-master01 certs]# ll kube-scheduler*
-rw-r--r-- 1 root root 1131 Apr 24 16:11 kube-scheduler.csr
-rw-r--r-- 1 root root  345 Apr 24 16:03 kube-scheduler-csr.json
-rw------- 1 root root 1679 Apr 24 16:11 kube-scheduler-key.pem
-rw-r--r-- 1 root root 1505 Apr 24 16:11 kube-scheduler.pem
4) Distribute the certificates
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/opt/k8s/certs/kube-scheduler-key.pem dest=/etc/kubernetes/ssl/'
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/opt/k8s/certs/kube-scheduler.pem dest=/etc/kubernetes/ssl/'
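Optionally (an extra step beyond the original), verify that the key and certificate actually landed on every master:

[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'ls -l /etc/kubernetes/ssl/kube-scheduler*'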
5) Generate the kube-scheduler.kubeconfig file
1. This is the configuration kube-scheduler needs to enable its secure port and RBAC authentication. 2. The kube-scheduler kubeconfig file contains the master address together with the certificate and private key created in the previous step.
## Set cluster parameters
[root@k8s-master01 ~]# kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.kubeconfig
Cluster "kubernetes" set.
## Set client credentials
[root@k8s-master01 ~]# kubectl config set-credentials "system:kube-scheduler" \
  --client-certificate=/etc/kubernetes/ssl/kube-scheduler.pem \
  --client-key=/etc/kubernetes/ssl/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
User "system:kube-scheduler" set.
## Set context parameters
[root@k8s-master01 ~]# kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
Context "system:kube-scheduler@kubernetes" created.
## Set the default context
[root@k8s-master01 ~]# kubectl config use-context system:kube-scheduler@kubernetes --kubeconfig=kube-scheduler.kubeconfig
Switched to context "system:kube-scheduler@kubernetes".
## Distribute the kubeconfig file
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/root/kube-scheduler.kubeconfig dest=/etc/kubernetes/config/'
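A quick way to confirm the result (an optional check, not in the original) is to print the assembled file; embedded certificate data is shown redacted:

[root@k8s-master01 ~]# kubectl config view --kubeconfig=kube-scheduler.kubeconfig
## the current-context field should read system:kube-scheduler@kubernetes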
6) Edit the kube-scheduler core configuration file
Like kube-controller-manager, kube-scheduler binds its insecure port to localhost only, while the secure port is exposed externally.
[root@k8s-master01 ~]# vim /opt/k8s/cfg/kube-scheduler.conf
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--address=127.0.0.1 \
  --authentication-kubeconfig=/etc/kubernetes/config/kube-scheduler.kubeconfig \
  --authorization-kubeconfig=/etc/kubernetes/config/kube-scheduler.kubeconfig \
  --bind-address=0.0.0.0 \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubeconfig=/etc/kubernetes/config/kube-scheduler.kubeconfig \
  --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --secure-port=10259 \
  --leader-elect=true \
  --port=10251 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-scheduler.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-scheduler-key.pem \
  --v=2"
## Distribute the config file
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/opt/k8s/cfg/kube-scheduler.conf dest=/etc/kubernetes/config'
7) Startup script
The unit file must specify the path of the configuration file to load.
[root@k8s-master01 ~]# vim /opt/k8s/unit/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config/kube-scheduler.conf
User=kube
ExecStart=/usr/local/bin/kube-scheduler $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_MASTER $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

## Distribute the unit file
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/opt/k8s/unit/kube-scheduler.service dest=/usr/lib/systemd/system/'
8) Start the service
[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'systemctl daemon-reload'
[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'systemctl enable kube-scheduler.service'
[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'systemctl start kube-scheduler.service'
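Once the service is running, a quick check (optional, beyond the original steps) confirms the ports configured above are listening and the scheduler reports healthy; /healthz is served on the insecure port 10251:

[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'ss -tnlp | grep kube-scheduler'
[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'curl -s http://127.0.0.1:10251/healthz'
## each host should return: ok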
9) Verify the leader host
[root@k8s-master01 ~]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master03_e0e29681-666b-11e9-b086-000c2920229d","leaseDurationSeconds":15,"acquireTime":"2019-04-24T08:35:14Z","renewTime":"2019-04-24T08:36:08Z","leaderTransitions":0}'
  creationTimestamp: "2019-04-24T08:35:14Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "11238"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: e17d5eee-666b-11e9-bdea-000c2920229d
## master03 is the leader host
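To exercise the election described in the summary, you can stop the scheduler on the current leader and watch leadership move; this is an optional experiment, and the host name k8s-master03 assumes each master is individually addressable in the Ansible inventory:

[root@k8s-master01 ~]# ansible k8s-master03 -m shell -a 'systemctl stop kube-scheduler.service'
[root@k8s-master01 ~]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml | grep holderIdentity
## holderIdentity should switch to another master within roughly leaseDurationSeconds (15s)
[root@k8s-master01 ~]# ansible k8s-master03 -m shell -a 'systemctl start kube-scheduler.service'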
10) Verify the master cluster status
Running the following command on any of the three master hosts should return the cluster status information.
[root@k8s-master02 config]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
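As a final end-to-end check (optional, and only meaningful once worker nodes have joined the cluster), create a throwaway pod and confirm the scheduler assigns it a node; the pod name scheduler-test is a hypothetical placeholder:

[root@k8s-master01 ~]# kubectl run scheduler-test --image=nginx --restart=Never
[root@k8s-master01 ~]# kubectl get pod scheduler-test -o wide
## the NODE column should show the node the scheduler picked
[root@k8s-master01 ~]# kubectl delete pod scheduler-test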