• Kubernetes single-node and multi-node environment setup


    Single-node Kubernetes setup:

    1. Create a CentOS 7 virtual machine in VMware Workstation, and give it generous CPU and memory.

    2. After the OS is installed, disable the firewall service that ships with CentOS:

    systemctl disable firewalld

    systemctl stop firewalld

    3. Install etcd and Kubernetes (Docker is installed automatically as a dependency):

    yum install -y etcd kubernetes

    4. After installation, modify two configuration files (the rest can keep their defaults).

    The Docker configuration file is /etc/sysconfig/docker; set OPTIONS in it to:

    OPTIONS='--selinux-enabled=false --insecure-registry gcr.io'

    The kube-apiserver configuration file is /etc/kubernetes/apiserver; remove ServiceAccount from its --admission-control list.
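    The ServiceAccount removal is a one-line sed. The sketch below tries the expression on a scratch copy first (the temp file and variable name are illustrative); once it looks right, the same sed can be run with `sudo sed -i` on /etc/kubernetes/apiserver:

    ```shell
    # Dry run on a scratch file: drop ServiceAccount from the admission-control
    # list wherever it sits, then show the result.
    tmp=$(mktemp)
    echo 'KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"' > "$tmp"
    sed -i 's/,ServiceAccount//; s/ServiceAccount,//' "$tmp"
    cat "$tmp"
    ```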

    5. Start all services, in this order:

    systemctl start etcd

    systemctl start docker

    systemctl start kube-apiserver

    systemctl start kube-controller-manager

    systemctl start kube-scheduler

    systemctl start kubelet

    systemctl start kube-proxy

    At this point a single-node Kubernetes cluster is up and running.
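    A quick smoke test, run on the VM itself (in this insecure setup kubectl defaults to the local apiserver on port 8080):

    ```shell
    kubectl get nodes                  # the local machine should appear and go Ready
    kubectl get pods --all-namespaces  # no pods yet, but the command should succeed
    ```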

    Multi-node Kubernetes setup:

    A multi-node environment consists of a master node (running etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet and kube-proxy) and worker nodes (each running kubelet and kube-proxy).

    1. Create three CentOS 7 virtual machines in VMware Workstation, and give each generous CPU and memory.

    2. After the OS is installed, disable the firewall service that ships with CentOS (on all three nodes):

    systemctl disable firewalld

    systemctl stop firewalld

    3. Install etcd and Kubernetes (Docker is installed automatically as a dependency; run on all three nodes):

    yum install -y etcd kubernetes

    4. After installation, configure the master node and the worker nodes as follows.

    Configure the master node:

    Configure the etcd service in /etc/etcd/etcd.conf:
    ETCD_LISTEN_CLIENT_URLS="http://192.168.1.10:2379" ---the address and port etcd listens on: <master IP>:2379 (2379 is the default port)

    Configure the kube-apiserver service in /etc/kubernetes/apiserver:
    KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.10:2379" ---the etcd address: <master IP>:2379 (2379 is the default port)
    KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0" ---bind on all interfaces

    The kube-controller-manager and kube-scheduler services need no extra configuration.

    Configure /etc/kubernetes/config:
    KUBE_MASTER="--master=http://192.168.1.10:8080" ---the master address: <master IP>:8080 (8080 is the default port)

    Configure the Docker service by changing OPTIONS in /etc/sysconfig/docker:
    #OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
    OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry gcr.io'

    In /etc/kubernetes/apiserver, remove ServiceAccount from the --admission-control list:
    #KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
    KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
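    Pulling the master-side apiserver changes together, the non-default lines in /etc/kubernetes/apiserver end up roughly as follows (a sketch; 192.168.1.10 is the example master IP used throughout, and every other line keeps its stock value):

    ```shell
    # /etc/kubernetes/apiserver -- only the lines that differ from the defaults
    KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
    KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.10:2379"
    KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
    ```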

    Once configuration is done, run the following on the master node:

    systemctl daemon-reload

    systemctl enable etcd
    systemctl start etcd
    systemctl enable docker
    systemctl start docker
    systemctl enable kube-apiserver
    systemctl start kube-apiserver
    systemctl enable kube-controller-manager
    systemctl start kube-controller-manager
    systemctl enable kube-scheduler
    systemctl start kube-scheduler

    systemctl enable kubelet
    systemctl start kubelet

    systemctl enable kube-proxy
    systemctl start kube-proxy

    etcdctl cluster-health ---verify that etcd started correctly

    Configure each worker node:

    Configure kubelet in /etc/kubernetes/kubelet:
    KUBELET_HOSTNAME="--hostname-override=192.168.1.11" ---the kubelet's hostname; use the node's own IP so each node registers under a distinct name
    KUBELET_API_SERVER="--api-servers=http://192.168.1.10:8080" ---the apiserver address: <master IP>:8080 (8080 is the default port)

    The kube-proxy service needs no extra configuration.

    Configure /etc/kubernetes/config:
    KUBE_MASTER="--master=http://192.168.1.10:8080" ---<master IP>:8080 (8080 is the default port)
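    On several nodes the same two file edits can be scripted. A sketch that generates the changed files into a scratch directory first (NODE_IP and MASTER_IP are example placeholders; copy the results into /etc/kubernetes/ once they look right):

    ```shell
    # Generate the per-node Kubernetes config fragments into a scratch dir.
    NODE_IP=192.168.1.11      # this node's own address (placeholder)
    MASTER_IP=192.168.1.10    # the master's address (placeholder)
    OUT=$(mktemp -d)
    cat > "$OUT/kubelet" <<EOF
    KUBELET_HOSTNAME="--hostname-override=${NODE_IP}"
    KUBELET_API_SERVER="--api-servers=http://${MASTER_IP}:8080"
    EOF
    cat > "$OUT/config" <<EOF
    KUBE_MASTER="--master=http://${MASTER_IP}:8080"
    EOF
    echo "$OUT"               # review, then copy into /etc/kubernetes/
    ```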
    Then enable and start the node services:
    systemctl enable docker
    systemctl start docker
    systemctl enable kubelet
    systemctl start kubelet
    systemctl enable kube-proxy
    systemctl start kube-proxy

    At this point the worker-node configuration is complete.

    On the master node, run kubectl get nodes to check whether the worker nodes have been registered. If they have not, restart the services.

    On the master node, run:

    systemctl enable etcd
    systemctl restart etcd
    systemctl enable docker
    systemctl restart docker
    systemctl enable kube-apiserver
    systemctl restart kube-apiserver
    systemctl enable kube-controller-manager
    systemctl restart kube-controller-manager
    systemctl enable kube-scheduler
    systemctl restart kube-scheduler

    On each worker node, run:

    systemctl enable docker
    systemctl restart docker
    systemctl enable kubelet
    systemctl restart kubelet
    systemctl enable kube-proxy
    systemctl restart kube-proxy

    Then run kubectl get nodes again to verify the result.

    Further configuration, as needed:

    Run <service-name> --help to see each service's configuration options, for example:
    kubelet --help
    kube-controller-manager --help

    Configuring the k8s cluster network with OVS (Open vSwitch):
    1. Set the subnet of the docker0 bridge on node1 and node2 (each node must get a distinct subnet):
    [root@localhost docker]# cat /etc/docker/daemon.json
    {"registry-mirrors": ["http://55488037.m.daocloud.io"], "bip": "172.17.42.1/24"}
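    Each node needs a distinct bip so the container subnets do not collide (node2 using 172.17.43.1/24 is an assumption; the registry mirror URL is the one from the file above). A sketch that writes the fragment to a scratch path and syntax-checks it before it is copied to /etc/docker/daemon.json:

    ```shell
    # Write the daemon.json fragment to a scratch path; node2 would use a
    # different bip, e.g. 172.17.43.1/24.
    cat > /tmp/daemon.json <<'EOF'
    {"registry-mirrors": ["http://55488037.m.daocloud.io"], "bip": "172.17.42.1/24"}
    EOF
    python3 -m json.tool /tmp/daemon.json   # fails loudly on a JSON syntax error
    ```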
    2. Disable SELinux in /etc/selinux/config:
    SELINUX=disabled
    3. Install the SELinux policy tools and bridge utilities, and relabel the Open vSwitch directory:
    yum install policycoreutils-python
    semanage fcontext -a -t openvswitch_rw_t "/etc/openvswitch(/.*)?"
    restorecon -Rv /etc/openvswitch
    service openvswitch status
    yum install bridge-utils

    ovs-vsctl set interface gre2 type=gre option:remote_ip=192.168.1.106 ---point the GRE tunnel at the peer node's IP
    brctl addif docker0 br0 ---attach the OVS bridge br0 to docker0
    ip link set dev br0 up
    ip link set dev docker0 up
    ip route add 172.17.0.0/16 dev docker0 ---route the whole container range via docker0
    iptables -t nat -F ---flush NAT rules
    iptables -F ---flush filter rules
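    The commands above use the bridge br0 and the GRE port gre2 without showing how they were created. A hedged sketch of the assumed setup on each node (the add-br/add-port steps are not in the original note, and 192.168.1.106 stands for the peer node's IP):

    ```shell
    # Assumed prerequisite steps for the OVS bridge and GRE port used above.
    ovs-vsctl add-br br0                          # create the OVS bridge
    ovs-vsctl add-port br0 gre2 -- set interface gre2 \
        type=gre option:remote_ip=192.168.1.106   # GRE tunnel to the peer node
    ```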

    FAQ

    1. Commands to restart the services:

    systemctl restart etcd
    systemctl restart docker
    systemctl restart kube-apiserver
    systemctl restart kube-controller-manager
    systemctl restart kube-scheduler

    Do well what you are supposed to do each day, and you will not feel lost.
  • Original post: https://www.cnblogs.com/sosogengdongni/p/8496982.html