Deploying Kubernetes with Rancher has a kernel requirement: the kernel must be version 5.4 or later.
cat >update-kernel.sh <<'EOF'
#!/bin/bash
sudo yum install -y https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
sudo yum info --enablerepo=elrepo-kernel kernel-lt kernel-ml
sudo yum install --skip-broken --enablerepo=elrepo-kernel -y kernel-lt kernel-lt-headers
# Regenerate the GRUB config so the newly installed kernel is picked up
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# List the available kernels
sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
# Set the default boot kernel (entry 0 is the newest)
sudo grub2-set-default 0
# Reboot for the change to take effect
echo "machine will reboot in 3 seconds ..." && sleep 3
reboot
EOF
sh update-kernel.sh
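After the machines come back up, it is worth confirming each node actually booted the new kernel; a quick check (grubby ships with CentOS 7, but any equivalent works):

# The running kernel should now report 5.4 or later
uname -r
# Optionally confirm which kernel GRUB will boot by default
grubby --default-kernel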
Install base packages and raise the file-descriptor limits
yum install lrzsz lsof wget ntpdate -y
echo "root soft nproc 655350" >> /etc/security/limits.conf
echo "root hard nproc 655350" >> /etc/security/limits.conf
echo "root soft nofile 655350" >> /etc/security/limits.conf
echo "root hard nofile 655350" >> /etc/security/limits.conf
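The limits.conf entries only apply to new login sessions; a quick sanity check after re-logging in as root:

# Expected: 655350 for both
ulimit -n    # max open files
ulimit -u    # max user processes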
Install Docker (required on every machine)
yum install -y yum-utils
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce-19.03.9 docker-ce-cli-19.03.9 containerd.io -y
mkdir -p /etc/docker/
cat >/etc/docker/daemon.json <<EOF
{
  "data-root": "/data/docker",
  "log-driver": "json-file",
  "log-opts": {"max-size": "200m", "max-file": "10"}
}
EOF
cat /etc/docker/daemon.json
systemctl enable docker && systemctl start docker
docker info | egrep "Docker Root Dir|Server Version"
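Optionally, a quick smoke test that the daemon can pull and run containers (assumes outbound access to Docker Hub):

# Prints the hello-world banner, then removes the container
docker run --rm hello-world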
Edit /etc/hosts. Run this on 10.11.6.217, the master node where Rancher will be deployed.
cat >/etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.11.6.217 k8s-master01
10.11.6.218 k8s-master02
10.11.6.219 k8s-master03
10.11.6.212 k8s-node-01
10.11.6.211 k8s-node-02
10.11.6.216 k8s-node-03
10.11.6.214 k8s-node-04
10.11.6.215 k8s-node-05
10.11.6.210 k8s-node-06
10.11.6.213 k8s-node-07
10.11.6.224 k8s-node-08
10.11.6.228 k8s-node-09
10.11.6.229 k8s-node-10
10.11.6.230 k8s-node-11
EOF
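The other nodes need the same name resolution. One way to push the file out is a small loop over the ip.txt list created in the next step (a sketch, assuming root SSH access to every node):

# Copy the hosts file to every other node (run after ip.txt exists)
for i in $(cat ip.txt); do scp /etc/hosts root@$i:/etc/hosts; done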
Create a dedicated RKE user on every machine to allow passwordless remote installation.
useradd RKE
echo "123456" | passwd --stdin RKE

Add the RKE user to the docker group on every node:

cat >ip.txt <<EOF
10.11.6.218
10.11.6.219
10.11.6.211
10.11.6.212
10.11.6.216
10.11.6.214
10.11.6.215
10.11.6.210
10.11.6.213
10.11.6.224
10.11.6.228
10.11.6.229
10.11.6.230
EOF
usermod -g RKE -G RKE,docker RKE && id RKE   # the master itself, too
for i in `cat ip.txt`; do ssh root@$i "usermod -g RKE -G RKE,docker RKE && id RKE"; done
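RKE requires its SSH user to reach the Docker daemon without sudo, so it is worth confirming the group membership took effect (a sketch; each ssh will prompt for the RKE password until keys are distributed below):

# "docker ps" must succeed as the RKE user on every node
for i in $(cat ip.txt); do echo -n "$i: "; ssh RKE@$i "docker ps >/dev/null && echo OK"; done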
Generate an SSH key and distribute it to the corresponding machines.
su - RKE
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa >/dev/null 2>&1
# Generate the key and seed authorized_keys on every machine
cp ~/.ssh/id_dsa.pub ~/.ssh/authorized_keys

Then, on the master, push the master's authorized_keys out to every node in ip.txt:

su - RKE
for i in $(cat ip.txt); do scp -P 22 ~/.ssh/authorized_keys RKE@$i:/home/RKE/.ssh/; done
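Before handing the cluster over to RKE, verify that passwordless login really works everywhere:

# BatchMode disables password prompts, so any host still requiring one fails fast
for i in $(cat ip.txt); do ssh -o BatchMode=yes RKE@$i "echo $i ok"; done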
Download the RKE binary
wget https://github.com/rancher/rke/releases/download/v1.2.4/rke_linux-amd64
chmod +x rke_linux-amd64
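A quick check that the download is intact and executable:

./rke_linux-amd64 --version
# expected output similar to: rke version v1.2.4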
Generate the template file
[RKE@k8s-master01 ~]$ ./rke_linux-amd64 config
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: /home/RKE/.ssh/id_dsa
[+] Number of Hosts [1]: 2
[+] SSH Address of host (1) [none]: 10.11.6.217
[+] SSH Port of host (1) [22]: 37822
[+] SSH Private Key Path of host (10.11.6.217) [none]: /home/RKE/.ssh/id_dsa
[+] SSH User of host (10.11.6.217) [ubuntu]: RKE
[+] Is host (10.11.6.217) a Control Plane host (y/n)? [y]: Y
[+] Is host (10.11.6.217) a Worker host (y/n)? [n]: n
[+] Is host (10.11.6.217) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (10.11.6.217) [none]: k8s-master01
[+] Internal IP of host (10.11.6.217) [none]:
[+] Docker socket path on host (10.11.6.217) [/var/run/docker.sock]:
[+] SSH Address of host (2) [none]: 10.11.6.212
[+] SSH Port of host (2) [22]: 37822
[+] SSH Private Key Path of host (10.11.6.212) [none]: /home/RKE/.ssh/id_dsa
[+] SSH User of host (10.11.6.212) [ubuntu]: RKE
[+] Is host (10.11.6.212) a Control Plane host (y/n)? [y]: n
[+] Is host (10.11.6.212) a Worker host (y/n)? [n]: y
[+] Is host (10.11.6.212) an etcd host (y/n)? [n]: n
[+] Override Hostname of host (10.11.6.212) [none]: k8s-node-gs01
[+] Internal IP of host (10.11.6.212) [none]:
[+] Docker socket path on host (10.11.6.212) [/var/run/docker.sock]:
[+] Network Plugin Type (flannel, calico, weave, canal) [canal]: calico
[+] Authentication Strategy [x509]:
[+] Authorization Mode (rbac, none) [rbac]:
[+] Kubernetes Docker image [rancher/hyperkube:v1.19.4-rancher1]:
[+] Cluster domain [cluster.local]:
[+] Service Cluster IP Range [10.43.0.0/16]:
[+] Enable PodSecurityPolicy [n]:
[+] Cluster Network CIDR [10.42.0.0/16]:
[+] Cluster DNS Service IP [10.43.0.10]:
[+] Add addon manifest URLs or YAML files [no]:
[RKE@k8s-master01 ~]$ ls
cluster.yml  rke_linux-amd64
[RKE@k8s-master01 ~]$
Add all of the nodes to the file. Note: the passwordless user must already have been added to the docker group on each host.
Edit the YAML deployment file:

cat cluster.yml
# If you intend to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: 10.11.6.217
  port: "37822"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: k8s-master01
  user: RKE
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/RKE/.ssh/id_dsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 10.11.6.218
  port: "37822"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: k8s-master02
  user: RKE
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/RKE/.ssh/id_dsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 10.11.6.219
  port: "37822"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: k8s-master03
  user: RKE
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/RKE/.ssh/id_dsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
######################### nodes #####################################
- address: 10.11.6.212
  port: "37822"
  internal_address: ""
  role:
  - worker
  hostname_override: k8s-node-01
  user: RKE
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/RKE/.ssh/id_dsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 10.11.6.211
  port: "37822"
  internal_address: ""
  role:
  - worker
  hostname_override: k8s-node-02
  user: RKE
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/RKE/.ssh/id_dsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 10.11.6.216
  port: "37822"
  internal_address: ""
  role:
  - worker
  hostname_override: k8s-node-03
  user: RKE
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/RKE/.ssh/id_dsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 10.11.6.214
  port: "37822"
  internal_address: ""
  role:
  - worker
  hostname_override: k8s-node-04
  user: RKE
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/RKE/.ssh/id_dsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 10.11.6.215
  port: "37822"
  internal_address: ""
  role:
  - worker
  hostname_override: k8s-node-05
  user: RKE
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/RKE/.ssh/id_dsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 10.11.6.210
  port: "37822"
  internal_address: ""
  role:
  - worker
  hostname_override: k8s-node-06
  user: RKE
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/RKE/.ssh/id_dsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 10.11.6.213
  port: "37822"
  internal_address: ""
  role:
  - worker
  hostname_override: k8s-node-07
  user: RKE
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/RKE/.ssh/id_dsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 10.11.6.224
  port: "37822"
  internal_address: ""
  role:
  - worker
  hostname_override: k8s-node-08
  user: RKE
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/RKE/.ssh/id_dsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 10.11.6.228
  port: "37822"
  internal_address: ""
  role:
  - worker
  hostname_override: k8s-node-09
  user: RKE
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/RKE/.ssh/id_dsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 10.11.6.229
  port: "37822"
  internal_address: ""
  role:
  - worker
  hostname_override: k8s-node-10
  user: RKE
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/RKE/.ssh/id_dsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 10.11.6.230
  port: "37822"
  internal_address: ""
  role:
  - worker
  hostname_override: k8s-node-11
  user: RKE
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: /home/RKE/.ssh/id_dsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
########################### services ########################
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    always_pull_images: false
    secrets_encryption_config: null
    audit_log: null
    admission_configuration: null
    event_rate_limit: null
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
network:
  plugin: canal
  options: {}
  mtu: 0
  node_selector: {}
  update_strategy: null
authentication:
  strategy: x509
  sans:
  # NOTE: the whitelist of IPs allowed to access the cluster
  # (added as SANs to the API server certificate)
  - "x.x.x.x"
  - "x.x.x.x"
  - "x.x.x.x"
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.4.13-rancher1
  alpine: rancher/rke-tools:v0.1.66
  nginx_proxy: rancher/rke-tools:v0.1.66
  cert_downloader: rancher/rke-tools:v0.1.66
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.66
  kubedns: rancher/k8s-dns-kube-dns:1.15.10
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.10
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.10
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.8.1
  coredns: rancher/coredns-coredns:1.7.0
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.8.1
  nodelocal: rancher/k8s-dns-node-cache:1.15.13
  kubernetes: rancher/hyperkube:v1.19.4-rancher1
  flannel: rancher/coreos-flannel:v0.13.0-rancher1
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher6
  calico_node: rancher/calico-node:v3.16.1
  calico_cni: rancher/calico-cni:v3.16.1
  calico_controllers: rancher/calico-kube-controllers:v3.16.1
  calico_ctl: rancher/calico-ctl:v3.16.1
  calico_flexvol: rancher/calico-pod2daemon-flexvol:v3.16.1
  canal_node: rancher/calico-node:v3.16.1
  canal_cni: rancher/calico-cni:v3.16.1
  canal_controllers: rancher/calico-kube-controllers:v3.16.1
  canal_flannel: rancher/coreos-flannel:v0.13.0-rancher1
  canal_flexvol: rancher/calico-pod2daemon-flexvol:v3.16.1
  weave_node: weaveworks/weave-kube:2.7.0
  weave_cni: weaveworks/weave-npc:2.7.0
  pod_infra_container: rancher/pause:3.2
  ingress: rancher/nginx-ingress-controller:nginx-0.35.0-rancher2
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
  metrics_server: rancher/metrics-server:v0.3.6
  windows_pod_infra_container: rancher/kubelet-pause:v0.1.4
  aci_cni_deploy_container: noiro/cnideploy:5.1.1.0.1ae238a
  aci_host_container: noiro/aci-containers-host:5.1.1.0.1ae238a
  aci_opflex_container: noiro/opflex:5.1.1.0.1ae238a
  aci_mcast_container: noiro/opflex:5.1.1.0.1ae238a
  aci_ovs_container: noiro/openvswitch:5.1.1.0.1ae238a
  aci_controller_container: noiro/aci-containers-controller:5.1.1.0.1ae238a
  aci_gbp_server_container: noiro/gbp-server:5.1.1.0.1ae238a
  aci_opflex_server_container: noiro/opflex-server:5.1.1.0.1ae238a
ssh_key_path: /home/RKE/.ssh/id_dsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: null
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
  update_strategy: null
  http_port: 0
  https_port: 0
  network_mode: ""
  tolerations: []
  default_backend: null
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
win_prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
  node_selector: {}
  update_strategy: null
  replicas: null
  tolerations: []
restore:
  restore: false
  snapshot_name: ""
rotate_encryption_key: false
dns: null
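Before running the installer it is cheap to eyeball that the addresses and hostname overrides in cluster.yml line up with /etc/hosts; a small grep sketch:

# Print address / hostname_override pairs for comparison against /etc/hosts
grep -E "^- address:|hostname_override:" cluster.yml
cat /etc/hosts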
Confirm the YAML is correct. Key points to check: hostnames, IPs, and whether each node is used as a master or a worker. Then start the installation:

su - RKE
sudo ./rke_linux-amd64 up --config ./cluster.yml

When it completes normally, the output ends with:

INFO[0033] Finished building Kubernetes cluster successfully
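If the run fails partway through, it can simply be re-run; rke up is idempotent. For more detail on a failing node, the binary also accepts a global --debug flag for verbose logging:

./rke_linux-amd64 --debug up --config ./cluster.yml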
# On the K8s master node, install the kubectl command and place the cluster kubeconfig at /root/.kube/config
[RKE@10.11.6.217 ~]$ ls
cluster.rkestate  cluster.yml  kube_config_cluster.yml  rke_linux-amd64

After the deployment finishes, back up the generated config and certificate state files; a mistake during later operations could otherwise leave the whole cluster unusable. The certificates for an RKE-deployed cluster are valid for 10 years.
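A minimal backup sketch (the /backup destination is an assumption; any off-host location will do):

# Archive the state file, cluster config, and kubeconfig with a date stamp
mkdir -p /backup
tar czf /backup/rke-$(date +%F).tar.gz cluster.rkestate cluster.yml kube_config_cluster.yml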
mkdir -p /root/.kube/
cp /home/RKE/kube_config_cluster.yml /root/.kube/config
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
# Verify the deployment succeeded
kubectl get nodes
kubectl get pods -A
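Optionally, kubectl can generate its own bash completion script (assumes the bash-completion package is installed):

# Enable kubectl tab completion for future shells
kubectl completion bash > /etc/bash_completion.d/kubectl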
Deploy the Kuboard web UI
kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
kubectl apply -f https://addons.kuboard.cn/metrics-server/0.3.7/metrics-server.yaml
kubectl get pods -n kube-system | egrep "kuboard|metrics-server"
kuboard-59bdf4d5fb-cmg58            1/1     Running   0          6m
metrics-server-78664db96b-nbrfb     1/1     Running   0          6m
# Get the token for logging in to the web UI
# If you installed Kubernetes following the docs at www.kuboard.cn, run this on the first master node
echo $(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d)

# Then open http://xxx.xxx.xxx.xxx:32567 in a browser, using the master's IP (or the host's enp interface IP).
# Enter the token you retrieved, and you can manage the cluster from the UI instead of the command line.
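If the page does not load, confirm the NodePort that Kuboard's manifest exposed (32567 per the steps above):

# Look for a PORT(S) column entry like 80:32567/TCP
kubectl get svc -n kube-system | grep kuboard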
Back up etcd
RKE's command-line tool already includes backup functionality; a single command each handles backup and restore.

Backup:
rke etcd snapshot-save --config cluster.yml --name snapshot-name
This writes the snapshot to the /opt/rke/etcd-snapshots directory.

Restore:
rke etcd snapshot-restore --config cluster.yml --name mysnapshot
This restores the cluster from the snapshot named mysnapshot in /opt/rke/etcd-snapshots.
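To take snapshots on a schedule, a plain cron entry is enough (a sketch; the path and cadence are assumptions, and % must be escaped in crontab):

# Nightly etcd snapshot at 02:00, named with the date, run from the RKE user's home
0 2 * * * cd /home/RKE && ./rke_linux-amd64 etcd snapshot-save --config cluster.yml --name snapshot-$(date +\%F)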