In the previous post (k8s 1.4.3 installation notes, part 1), etcd, Docker and flannel were already installed; now we can move on to installing Kubernetes itself.
1、K8S
The kubernetes package in the CentOS yum repositories is still at version 1.2.0, so we have to install Kubernetes from a downloaded release tarball instead:
[root@bogon system]# yum list |grep kubernetes
cockpit-kubernetes.x86_64        0.114-2.el7.centos            extras
kubernetes.x86_64                1.2.0-0.13.gitec7364b.el7     extras
kubernetes-client.x86_64         1.2.0-0.13.gitec7364b.el7     extras
kubernetes-cni.x86_64            0.3.0.1-0.07a8a2              kubelet
kubernetes-master.x86_64         1.2.0-0.13.gitec7364b.el7     extras
kubernetes-node.x86_64           1.2.0-0.13.gitec7364b.el7     extras
kubernetes-unit-test.x86_64      1.2.0-0.13.gitec7364b.el7     extras
1.1 Downloading K8S
Download the Kubernetes release tarball with wget or any other download tool: https://github.com/kubernetes/kubernetes/releases/download/v1.4.3/kubernetes.tar.gz. Once the download finishes we have the installation files for version 1.4.3.
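For example, with wget (any download tool works; the URL is the one given above):
wget https://github.com/kubernetes/kubernetes/releases/download/v1.4.3/kubernetes.tar.gz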
1.2 Extracting and installing
Extract the tarball and copy the binaries to a suitable location:
tar -zxvf kubernetes.tar.gz
cd kubernetes/server/bin
mkdir /usr/local/kube
cp -R * /usr/local/kube
Edit the environment file /etc/profile to add the kube binaries to the PATH:
export KUBE_PATH=/usr/local/kube
export PATH=$PATH:$KUBE_PATH
Source the file so the new environment variables take effect:
source /etc/profile
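To confirm the binaries are now on the PATH, a quick check (my own addition, not part of the original steps):
which kube-apiserver kubectl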
1.3 Starting the master node
The master node here is 192.168.37.130; it needs to run three processes: kube-apiserver, kube-controller-manager and kube-scheduler.
1.3.1 Opening firewall ports
If the firewall has not been turned off and the system uses firewalld, the relevant ports need to be opened:
firewall-cmd --zone=public --add-port=8080/tcp --permanent
firewall-cmd --zone=public --add-port=10250/tcp --permanent
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=15441/tcp --permanent
firewall-cmd --reload
firewall-cmd --list-all
1.3.2 Starting kube-apiserver
kube-apiserver --insecure-bind-address=192.168.37.130 --insecure-port=8080 --service-cluster-ip-range='192.168.37.130/24' --log_dir=/usr/local/kubernete_test/logs/kube --v=0 --logtostderr=false --etcd_servers=http://192.168.37.130:2379,http://192.168.37.131:2379 --allow_privileged=false
1.3.3 Starting kube-controller-manager
kube-controller-manager --v=0 --logtostderr=true --log_dir=/data/kubernets/logs/kube-controller-manager/ --master=http://192.168.37.130:8080
1.3.4 Starting kube-scheduler
kube-scheduler --master='192.168.37.130:8080' --v=0 --log_dir=/data/kubernets/logs/kube-scheduler
1.3.5 Checking that everything started
[root@bogon ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
The output shows that both etcd members are up and healthy.
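Alternatively (my own addition, assuming the insecure address and port used above), the API server's health can be probed directly:
curl http://192.168.37.130:8080/healthz
# expected output: ok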
1.3.6 Setting up systemd services
Create a service unit file for each of the processes in /usr/lib/systemd/system.
1、kube-apiserver.service
[Unit]
Description=kube-apiserver
Documentation=http://kubernetes.io/docs/
[Service]
EnvironmentFile=-/etc/sysconfig/kubernets/kube-apiserver
ExecStart=/usr/local/kube/kube-apiserver ${INSECURE_BIND_ADDRESS} ${INSECURE_PORT} ${SERVICE_CLUSTER_IP_RANGE} ${LOG_DIR} ${VERSION} ${LOGTOSTDERR} ${ETCD_SERVERS} ${ALLOW_PRIVILEGED}
KillMode=process
Restart=on-failure
RestartSec=42s
[Install]
WantedBy=multi-user.target
The corresponding configuration file /etc/sysconfig/kubernets/kube-apiserver looks like this:
#INSECURE_BIND_ADDRESS="--insecure-bind-address=0.0.0.0"
INSECURE_BIND_ADDRESS="--address=0.0.0.0"
INSECURE_PORT="--insecure-port=8080"
SERVICE_CLUSTER_IP_RANGE="--service-cluster-ip-range=172.16.0.0/16"
LOG_DIR="--log_dir=/usr/local/kubernete_test/logs/kube"
VERSION="--v=0"
LOGTOSTDERR="--logtostderr=false"
ETCD_SERVERS="--etcd_servers=http://192.168.37.130:2379,http://192.168.37.131:2379"
ALLOW_PRIVILEGED="--allow-privileged=false"
ADMISSION_CONTROL="--admission-control=NamespaceAutoProvision,ServiceAccount,LimitRanger,ResourceQuota"
Note: the variable names in the configuration file must not contain "-".
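A minimal illustration of that rule (my own example; the flag values themselves keep their dashes, only the variable names cannot use them):
# LOG-DIR="--log_dir=/usr/local/kubernete_test/logs/kube"    <- invalid variable name
LOG_DIR="--log_dir=/usr/local/kubernete_test/logs/kube"      # valid: underscores in the name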
Update (November 3, 2016): I originally set INSECURE_BIND_ADDRESS="--insecure-bind-address=192.168.37.130", but with that setting requests over the local loopback address 127.0.0.1:8080 were refused. Changing it to INSECURE_BIND_ADDRESS="--address=0.0.0.0" avoids the problem; see http://www.cnblogs.com/lyzw/p/6023935.html for details.
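A quick way to confirm that loopback access works after the change (my own check, assuming the 0.0.0.0 binding and port 8080 above):
curl http://127.0.0.1:8080/version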
2、kube-controller-manager
Configure kube-controller-manager.service:
[Unit]
Description=kube-controller-manager
Documentation=http://kubernetes.io/docs/
[Service]
EnvironmentFile=-/etc/sysconfig/kubernets/kube-controller-manager
ExecStart=/usr/local/kube/kube-controller-manager ${VERSION} ${LOGTOSTDERR} ${LOG_DIR} ${MASTER}
KillMode=process
Restart=on-failure
RestartSec=42s
[Install]
WantedBy=multi-user.target
Set up /etc/sysconfig/kubernets/kube-controller-manager:
VERSION="--v=0" LOGTOSTDERR="--logtostderr=true" LOG_DIR="--log_dir=/data/kubernets/logs/kube-controller-manager/" MASTER="--master=http://192.168.37.130:8080"
3、Setting up the kube-scheduler service
kube-scheduler.service
[Unit]
Description=kube-scheduler
Documentation=http://kubernetes.io/docs/
[Service]
EnvironmentFile=-/etc/sysconfig/kubernets/kube-scheduler
ExecStart=/usr/local/kube/kube-scheduler ${VERSION} ${LOGTOSTDERR} ${LOG_DIR} ${MASTER}
KillMode=process
Restart=on-failure
RestartSec=42s
[Install]
WantedBy=multi-user.target
The configuration file is as follows:
VERSION="--v=0" LOGTOSTDERR="--logtostderr=true" LOG_DIR="--log_dir=/data/kubernets/logs/kube-scheduler" MASTER="--master=http://192.168.37.130:8080
4、Restarting the services
systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
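If you also want the master components to start automatically at boot (not covered in the original steps), they can be enabled as well:
systemctl enable kube-apiserver kube-controller-manager kube-scheduler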
1.4 Starting the minion nodes
Each minion needs to run two processes: kube-proxy and kubelet.
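If firewalld is also enabled on the minions (the original only opened ports on the master), the kubelet port used in the configuration below will likely need to be opened there as well, for example:
firewall-cmd --zone=public --add-port=10250/tcp --permanent
firewall-cmd --reload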
1.4.1 Starting kube-proxy
# run this on both machines
kube-proxy --logtostderr=true --v=0 --master=http://192.168.37.130:8080
1.4.2 Starting kubelet
kubelet --logtostderr=true --v=0 --allow-privileged=false --log_dir=/data/kubernets/logs/kubelet --address=0.0.0.0 --port=10250 --hostname_override=192.168.37.130 --api_servers=http://192.168.37.130:8080
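Once kubelet is running on the minions, you can verify from the master that the nodes have registered (my own check, using the master address from this setup):
kubectl -s http://192.168.37.130:8080 get nodes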
1.4.3 Setting up systemd services
1、kube-proxy.service
[Unit]
Description=kube-proxy
Documentation=http://kubernetes.io/docs/
[Service]
EnvironmentFile=-/etc/sysconfig/kubernets/kube-proxy
ExecStart=/usr/local/kube/kube-proxy ${VERSION} ${LOGTOSTDERR} ${LOG_DIR} ${MASTER}
KillMode=process
Restart=on-failure
RestartSec=42s
[Install]
WantedBy=multi-user.target
/etc/sysconfig/kubernets/kube-proxy
VERSION="--v=0" LOGTOSTDERR="--logtostderr=true" LOG_DIR="--log_dir=/data/kubernets/logs/kube-controller-manager/" MASTER="--master=http://192.168.37.130:8080"
2、kubelet.service
[Unit]
Description=kubelet
Documentation=http://kubernetes.io/docs/
[Service]
EnvironmentFile=-/etc/sysconfig/kubernets/kubelet
ExecStart=/usr/local/kube/kubelet ${LOGTOSTDERR} ${VERSION} ${ALLOW_PRIVILEGED} ${LOG_DIR} ${ADDRESS} ${PORT} ${HOSTNAME_OVERRIDE} ${API_SERVERS}
KillMode=process
Restart=on-failure
RestartSec=42s
[Install]
WantedBy=multi-user.target
The /etc/sysconfig/kubernets/kubelet configuration file:
LOGTOSTDERR="--logtostderr=true" VERSION="--v=0" ALLOW_PRIVILEGED="--allow-privileged=false" LOG_DIR="--log_dir=/data/kubernets/logs/kubelet" ADDRESS="--address=0.0.0.0" PORT="--port=10250" HOSTNAME_OVERRIDE="--hostname_override=192.168.37.131" API_SERVERS="--api_servers=http://192.168.37.130:8080"
With the steps above, Kubernetes is essentially installed; the dashboard will be added in a follow-up post.
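As a simple smoke test (my own addition, assuming the minions can pull images from Docker Hub), you could launch a small test deployment from the master and check that pods get scheduled onto the nodes:
kubectl -s http://192.168.37.130:8080 run nginx --image=nginx --replicas=2
kubectl -s http://192.168.37.130:8080 get pods -o wide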
Problems encountered:
1、Using a configuration copied from the web, starting the API server produces some warnings:
[root@bogon server]# kube-apiserver --address=192.168.37.130 --insecure-port=8080 --service-cluster-ip-range='192.168.37.130/24' --log_dir=/usr/local/kubernete_test/logs/kube --kubelet_port=10250 --v=0 --logtostderr=false --etcd_servers=http://192.168.37.130:2379,http://192.168.37.131:2379 --allow_privileged=false
Flag --address has been deprecated, see --insecure-bind-address instead.
Flag --kubelet-port has been deprecated, kubelet-port is deprecated and will be removed.
[restful] 2016/11/01 15:31:15 log.go:30: [restful/swagger] listing is available at https://192.168.37.130:6443/swaggerapi/
[restful] 2016/11/01 15:31:15 log.go:30: [restful/swagger] https://192.168.37.130:6443/swaggerui/ is mapped to folder /swagger-ui/
2、kubectl get componentstatuses reports an error
[root@bogon ~]# kubectl get componentstatuses
The connection to the server localhost:8080 was refused - did you specify the right host or port?
If this error occurs on the master machine, it is because /etc/hosts has not been configured; adding an "IP localhost" entry to the file fixes it.
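A typical entry would look like this (my own example; adjust to your host name as needed):
127.0.0.1   localhost localhost.localdomain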
If it occurs on a slave node, specify the master explicitly when running the command: kubectl -s $masterIP:$port get componentstatuses, where $masterIP is the master node's IP and $port is the master's service port (8080 in this document), for example: kubectl -s http://192.168.37.130:8080 get componentstatuses