etcd Cluster Configuration
Master node configuration
1. Install kubernetes-master and etcd
[root@k8s ~]# yum -y install kubernetes-master etcd
2. Configure the etcd options
[root@k8s ~]# cat /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://172.19.15.92:2380"
ETCD_LISTEN_CLIENT_URLS="http://172.19.15.92:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_NAME="etcd1"
ETCD_HEARTBEAT_INTERVAL=6000
ETCD_ELECTION_TIMEOUT=30000

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.19.15.92:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://172.19.15.92:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://172.19.15.92:2380,etcd2=http://172.19.15.93:2380,etcd3=http://172.19.15.94:2380"
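etcd's tuning guidance recommends that the election timeout be at least 5x the heartbeat interval (roughly 10x for high-latency links). A minimal sketch checking that relationship for the values in the config above (the 5x threshold comes from etcd's tuning docs, not from this setup):

```shell
# Values from the etcd.conf above (milliseconds).
HEARTBEAT=6000
ELECTION=30000

# etcd tuning guidance: election timeout should be at least 5x the
# heartbeat interval, so a slow heartbeat doesn't trigger spurious elections.
if [ "$ELECTION" -ge $((HEARTBEAT * 5)) ]; then
  result="timeouts ok"
else
  result="election timeout too low"
fi
echo "$result"
```

Here 30000 ms is exactly 5x 6000 ms, so the pair sits at the lower bound of the recommended range.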
Node configuration
1. Install kubernetes-node, etcd, flannel, and docker
[root@k8s-node1 ~]# yum -y install kubernetes-node etcd flannel docker
2. Configure etcd on each node; node1 and node2 are configured the same way, using node1's configuration file as the example
[root@k8s-node1 ~]# cat /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://172.19.15.93:2380"
ETCD_LISTEN_CLIENT_URLS="http://172.19.15.93:2379,http://127.0.0.1:2379"
ETCD_NAME="etcd2"
ETCD_HEARTBEAT_INTERVAL=6000
ETCD_ELECTION_TIMEOUT=30000

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.19.15.93:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://172.19.15.93:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://172.19.15.92:2380,etcd2=http://172.19.15.93:2380,etcd3=http://172.19.15.94:2380"
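Since the member configs differ only in ETCD_NAME and the node's IP, they can be stamped out with a small helper. This generator is a hypothetical convenience (not part of the etcd packaging); the member list and timeouts are the ones used above:

```shell
# Hypothetical generator: emit an etcd.conf body for one member given its
# ETCD_NAME and IP -- only those two values vary from node to node.
gen_etcd_conf() {
  name=$1
  ip=$2
  cat <<EOF
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://${ip}:2380"
ETCD_LISTEN_CLIENT_URLS="http://${ip}:2379,http://127.0.0.1:2379"
ETCD_NAME="${name}"
ETCD_HEARTBEAT_INTERVAL=6000
ETCD_ELECTION_TIMEOUT=30000

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://${ip}:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://${ip}:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://172.19.15.92:2380,etcd2=http://172.19.15.93:2380,etcd3=http://172.19.15.94:2380"
EOF
}

# On node1 this would be: gen_etcd_conf etcd2 172.19.15.93 > /etc/etcd/etcd.conf
conf=$(gen_etcd_conf etcd2 172.19.15.93)
```

On node2 the call would be `gen_etcd_conf etcd3 172.19.15.94` with everything else unchanged.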
Start the etcd cluster
Start etcd on each of the three servers:
[root@k8s ~]# systemctl start etcd.service
[root@k8s ~]# systemctl status etcd.service -l
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-07-03 18:13:06 CST; 16h ago
 Main PID: 2085 (etcd)
    Tasks: 31
   Memory: 328.4M
   CGroup: /system.slice/etcd.service
           └─2085 /usr/bin/etcd --name=etcd1 --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://172.19.15.92:2379,http://127.0.0.1:2379
Check the etcd cluster status
[root@k8s ~]# etcdctl cluster-health
member 8c24796af2c20350 is healthy: got healthy result from http://172.19.15.94:2379
member e66597512233d97d is healthy: got healthy result from http://172.19.15.93:2379
member edfc36869b54e803 is healthy: got healthy result from http://172.19.15.92:2379
cluster is healthy
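A 3-member cluster keeps quorum as long as 2 members are healthy, so it tolerates one node failure. A quick sketch that counts healthy members from the output above (the output is embedded here as sample text so the check is self-contained):

```shell
# Sample output from `etcdctl cluster-health`, as captured above.
health_output='member 8c24796af2c20350 is healthy: got healthy result from http://172.19.15.94:2379
member e66597512233d97d is healthy: got healthy result from http://172.19.15.93:2379
member edfc36869b54e803 is healthy: got healthy result from http://172.19.15.92:2379
cluster is healthy'

# Count members reporting healthy; a 3-member cluster needs 2 for quorum.
healthy=$(printf '%s\n' "$health_output" | grep -c '^member .* is healthy')
quorum=2
if [ "$healthy" -ge "$quorum" ]; then
  echo "quorum ok ($healthy/3 healthy)"
fi
```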
Kubernetes Cluster Configuration
Master node configuration
1. Edit the apiserver configuration file; note the parameters of the KUBE_ADMISSION_CONTROL option
[root@k8s ~]# cat /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://172.19.15.92:2379,http://172.19.15.93:2379,http://172.19.15.94:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
KUBE_API_ARGS=""
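Admission plugins run in the order they appear in `--admission-control`, so the comma-separated list is ordered, not just a set (e.g. NamespaceLifecycle runs before the quota plugins). A small sketch that splits the list out for inspection:

```shell
# The plugin list from the apiserver config above; plugins are invoked
# left to right, so list order is significant.
ADMISSION_CONTROL="NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

# One plugin per line, in invocation order.
plugins=$(printf '%s\n' "$ADMISSION_CONTROL" | tr ',' '\n')
echo "$plugins"
count=$(printf '%s\n' "$plugins" | grep -c .)
```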
2. Start the services
[root@k8s ~]# systemctl start kube-apiserver
[root@k8s ~]# systemctl start kube-controller-manager
[root@k8s ~]# systemctl start kube-scheduler
[root@k8s ~]# systemctl enable kube-apiserver
[root@k8s ~]# systemctl enable kube-controller-manager
[root@k8s ~]# systemctl enable kube-scheduler
Node configuration
1. Edit the config file; node1 and node2 use the same configuration, with node1 as the example
[root@k8s-node1 ~]# cat /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://172.19.15.92:8080"
2. Configure kubelet
[root@k8s-node1 ~]# cat /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override=172.19.15.93"
KUBELET_API_SERVER="--api-servers=http://172.19.15.92:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
3. Docker service unit file
[root@k8s-node1 ~]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target firewalld.service

[Service]
Type=notify
Environment="http_proxy=http://192.168.59.241:8888/" "https_proxy=https://192.168.59.241:8888/"
ExecStart=/usr/bin/dockerd --registry-mirror=http://f2d6cb40.m.daocloud.io --bip=192.100.90.1/24
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
Network Configuration
Flannel is used for the cluster network here; it has already been installed on the two nodes, and is configured below.
1. Configure flannel on the nodes
[root@k8s-node1 ~]# cat /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://172.19.15.92:2379,http://172.19.15.93:2379,http://172.19.15.94:2379"
FLANNEL_ETCD_PREFIX="/k8s/network"
FLANNEL_OPTIONS="--logtostderr=true --log_dir=/var/log/k8s/flannel/ --etcd-prefix=/k8s/network --etcd-endpoints=http://172.19.15.92:2379,http://172.19.15.93:2379,http://172.19.15.94:2379 --iface=ens160"
The master node also needs to write the flannel network configuration into etcd:
etcdctl set /k8s/network/config '{"Network":"192.100.0.1/16"}'
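The value stored under /k8s/network/config is plain JSON, and flannel accepts optional keys beyond Network, such as SubnetLen and a Backend section. The vxlan example below is an assumption for illustration (the setup above relies on flannel's defaults), written with the network in network-address form:

```shell
# A fuller flannel network config. SubnetLen and the vxlan backend are
# optional keys, assumed here for illustration -- not taken from the setup above.
config='{"Network":"192.100.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'

# The command that would store it under the prefix used by FLANNEL_ETCD_PREFIX
# (shown as a string here rather than executed):
cmd="etcdctl set /k8s/network/config '$config'"
echo "$cmd"
```

With SubnetLen=24, each node receives a /24 carved out of the /16, matching the per-node `--bip` subnets used for Docker.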
2. Start the services
[root@k8s-node1 ~]# systemctl start kubelet
[root@k8s-node1 ~]# systemctl start docker
[root@k8s-node1 ~]# systemctl start flanneld
[root@k8s-node1 ~]# systemctl enable kubelet
[root@k8s-node1 ~]# systemctl enable docker
[root@k8s-node1 ~]# systemctl enable flanneld
Check the cluster status
[root@k8s ~]# kubectl get nodes
NAME           STATUS    AGE
172.19.15.92   Ready     16h
172.19.15.93   Ready     1d
172.19.15.94   Ready     1d
[root@k8s ~]# etcdctl member list
8c24796af2c20350: name=etcd3 peerURLs=http://172.19.15.94:2380 clientURLs=http://172.19.15.94:2379 isLeader=false
e66597512233d97d: name=etcd2 peerURLs=http://172.19.15.93:2380 clientURLs=http://172.19.15.93:2379 isLeader=false
edfc36869b54e803: name=etcd1 peerURLs=http://172.19.15.92:2380 clientURLs=http://172.19.15.92:2379 isLeader=true
[root@k8s ~]# etcdctl cluster-health
member 8c24796af2c20350 is healthy: got healthy result from http://172.19.15.94:2379
member e66597512233d97d is healthy: got healthy result from http://172.19.15.93:2379
member edfc36869b54e803 is healthy: got healthy result from http://172.19.15.92:2379
cluster is healthy
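A node only counts as schedulable once its STATUS column reports Ready. A sketch that counts Ready nodes from the `kubectl get nodes` output above (embedded as sample text so it runs without a live cluster):

```shell
# Sample output from `kubectl get nodes`, as captured above.
nodes_output='NAME           STATUS    AGE
172.19.15.92   Ready     16h
172.19.15.93   Ready     1d
172.19.15.94   Ready     1d'

# Skip the header row, then count rows whose STATUS column is exactly Ready
# (this also excludes states like NotReady).
ready=$(printf '%s\n' "$nodes_output" | awk 'NR > 1 && $2 == "Ready"' | grep -c .)
echo "$ready nodes Ready"
```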
Change Docker's subnet to the one allocated by flannel
# export FLANNEL_SUBNET=10.254.26.1/24
# cat << EOF > /etc/docker/daemon.json
{
  "bip" : "$FLANNEL_SUBNET"
}
EOF
# systemctl daemon-reload
# systemctl restart docker
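Rather than exporting FLANNEL_SUBNET by hand, which risks drifting from what flannel actually allocated, flanneld writes its allocation to /run/flannel/subnet.env, and that file can be sourced directly. A sketch of this approach, using a sample copy of the file in /tmp so it is self-contained; the subnet and MTU values are illustrative, not taken from this cluster:

```shell
# flanneld writes FLANNEL_NETWORK / FLANNEL_SUBNET / FLANNEL_MTU to
# /run/flannel/subnet.env. A sample copy is created here so the sketch
# runs standalone; on a real node you would source the file flanneld wrote.
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=192.100.0.0/16
FLANNEL_SUBNET=192.100.26.1/24
FLANNEL_MTU=1472
EOF

. /tmp/subnet.env

# Render daemon.json from the sourced values instead of a hardcoded export.
daemon_json=$(printf '{\n  "bip": "%s",\n  "mtu": %s\n}\n' "$FLANNEL_SUBNET" "$FLANNEL_MTU")
echo "$daemon_json"
# Then write it to /etc/docker/daemon.json and `systemctl restart docker`.
```

Sourcing the file also picks up the flannel MTU, which matters for overlay backends that add encapsulation overhead.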