• Linux Ops and Architecture Path: Kubernetes Cluster Deployment


    I. Introduction to Kubernetes

    Kubernetes, abbreviated K8s, is a leading distributed-architecture solution built on container technology. It is Google's open-source container cluster management system (known inside Google as Borg). On top of Docker, it gives containerized applications a complete set of capabilities: deployment, resource scheduling, service discovery, and dynamic scaling, making large container clusters far easier to manage.

    Kubernetes is a complete distributed-system platform with full cluster management capabilities: multi-level security and admission control, multi-tenant application support, transparent service registration and discovery, a built-in smart load balancer, strong fault detection and self-healing, rolling upgrades and online scaling, a pluggable automatic resource scheduler, and fine-grained resource quota management. It also ships with management tooling covering development, deployment, testing, and operations monitoring.

    II. Kubernetes Architecture and Components

    III. K8s Features

    Kubernetes is a container scheduling and management system designed for production. It has native support for the core requirements of a container platform: load balancing, service discovery, high availability, rolling upgrades, and auto-scaling.

    A K8s cluster consists of distributed storage (etcd), worker nodes (originally called Minions, now Nodes), and control nodes (Masters). All cluster state is kept in etcd, while the Master runs the cluster's management and control components. Nodes are the hosts that actually run application containers; each Node runs a kubelet agent that manages the containers, images, and volumes on that host.
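    To make the role split concrete, here is a minimal sketch, assuming the systemd-packaged services installed later in this guide, of which services run where:

    # On the master: the control plane plus etcd
    systemctl list-units --type=service | grep -E 'etcd|kube-(apiserver|controller-manager|scheduler)'

    # On each node: the kubelet agent, the service proxy, and Docker
    systemctl list-units --type=service | grep -E 'kubelet|kube-proxy|docker'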

    Kubernetes capabilities:
    1. Automated container deployment and replication
    2. Scale the number of containers up or down at any time
    3. Organize containers into groups and load-balance across them
    4. Easily roll out new versions of application containers
    5. Provide container resilience: if a container fails, replace it, and so on

    Problems Kubernetes solves:
    1. Scheduling: which machine a container should run on
    2. Lifecycle and health: keeping containers running without errors
    3. Service discovery: where a container is and how to talk to it
    4. Monitoring: whether a container is running properly
    5. Authentication: who may access a container
    6. Container aggregation: how to combine multiple containers into one application

    IV. Kubernetes Environment Planning

    1. Environment

    [root@k8s-master ~]# cat /etc/redhat-release 
    CentOS Linux release 7.2.1511 (Core) 
    [root@k8s-master ~]# uname -r
    3.10.0-327.el7.x86_64
    [root@k8s-master ~]# systemctl status firewalld.service 
    ● firewalld.service - firewalld - dynamic firewall daemon
       Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
       Active: inactive (dead)
    [root@k8s-master ~]# getenforce 
    Disabled
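    The output above shows firewalld stopped and SELinux disabled; this guide assumes both. If either is still on, they can be turned off like this (the config-file change takes full effect after a reboot):

    systemctl stop firewalld && systemctl disable firewalld
    setenforce 0                      # permissive for the current boot
    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config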

    2. Server Plan

    Node / Role                 Hostname       IP
    Master, etcd, registry      k8s-master     10.0.0.148
    Node1                       k8s-node-1     10.0.0.149
    Node2                       k8s-node-2     10.0.0.150

    3. Unify /etc/hosts

    echo '
    10.0.0.148    k8s-master
    10.0.0.148    etcd
    10.0.0.148    registry
    10.0.0.149    k8s-node-1
    10.0.0.150    k8s-node-2' >> /etc/hosts
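    A quick check that every name resolves to the planned address (getent reads /etc/hosts):

    for h in k8s-master etcd registry k8s-node-1 k8s-node-2; do
        getent hosts $h
    done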

    4. Kubernetes Cluster Deployment Architecture Diagram

    V. Kubernetes Cluster Deployment

    1. Master Installation

    ① Deploy etcd, which K8s depends on

    yum install etcd -y

    Edit the configuration file: vim /etc/etcd/etcd.conf. The only active (uncommented) settings are ETCD_DATA_DIR, ETCD_LISTEN_CLIENT_URLS, ETCD_NAME, and ETCD_ADVERTISE_CLIENT_URLS:

    #[Member]
    #ETCD_CORS=""
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    #ETCD_WAL_DIR=""
    #ETCD_LISTEN_PEER_URLS="http://localhost:2380"
    ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
    #ETCD_MAX_SNAPSHOTS="5"
    #ETCD_MAX_WALS="5"
    ETCD_NAME=master
    #ETCD_SNAPSHOT_COUNT="100000"
    #ETCD_HEARTBEAT_INTERVAL="100"
    #ETCD_ELECTION_TIMEOUT="1000"
    #ETCD_QUOTA_BACKEND_BYTES="0"
    #ETCD_MAX_REQUEST_BYTES="1572864"
    #ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
    #ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
    #ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
    #
    #[Clustering]
    #ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
    ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
    #ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
    #ETCD_DISCOVERY=""
    #ETCD_DISCOVERY_FALLBACK="proxy"
    #ETCD_DISCOVERY_PROXY=""
    #ETCD_DISCOVERY_SRV=""
    #ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
    #ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    #ETCD_INITIAL_CLUSTER_STATE="new"
    #ETCD_STRICT_RECONFIG_CHECK="true"
    #ETCD_ENABLE_V2="true"
    #
    #[Proxy]
    #ETCD_PROXY="off"
    #ETCD_PROXY_FAILURE_WAIT="5000"
    #ETCD_PROXY_REFRESH_INTERVAL="30000"
    #ETCD_PROXY_DIAL_TIMEOUT="1000"
    #ETCD_PROXY_WRITE_TIMEOUT="5000"
    #ETCD_PROXY_READ_TIMEOUT="0"
    #
    #[Security]
    #ETCD_CERT_FILE=""
    #ETCD_KEY_FILE=""
    #ETCD_CLIENT_CERT_AUTH="false"
    #ETCD_TRUSTED_CA_FILE=""
    #ETCD_AUTO_TLS="false"
    #ETCD_PEER_CERT_FILE=""
    #ETCD_PEER_KEY_FILE=""
    #ETCD_PEER_CLIENT_CERT_AUTH="false"
    #ETCD_PEER_TRUSTED_CA_FILE=""
    #ETCD_PEER_AUTO_TLS="false"
    #
    #[Logging]
    #ETCD_DEBUG="false"
    #ETCD_LOG_PACKAGE_LEVELS=""
    #ETCD_LOG_OUTPUT="default"
    #
    #[Unsafe]
    #ETCD_FORCE_NEW_CLUSTER="false"
    #
    #[Version]
    #ETCD_VERSION="false"
    #ETCD_AUTO_COMPACTION_RETENTION="0"
    #
    #[Profiling]
    #ETCD_ENABLE_PPROF="false"
    #ETCD_METRICS="basic"
    #
    #[Auth]
    #ETCD_AUTH_TOKEN="simple"

    Start etcd, enable it at boot, and verify the cluster health

    [root@k8s-master ~]# systemctl start etcd.service 
    [root@k8s-master ~]# systemctl enable etcd.service 
    Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
    [root@k8s-master ~]# etcdctl -C http://etcd:4001 cluster-health
    member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
    cluster is healthy
    [root@k8s-master ~]# etcdctl -C http://etcd:2379 cluster-health
    member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
    cluster is healthy
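    Optionally, a write/read round-trip through the v2 API confirms etcd is actually serving requests; the key name below is arbitrary:

    etcdctl -C http://etcd:2379 set /sanity/check ok
    etcdctl -C http://etcd:2379 get /sanity/check
    etcdctl -C http://etcd:2379 rm /sanity/check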

    ② Install Docker

    Add the yum repository

    wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
    sed -i 's#download.docker.com#mirrors.ustc.edu.cn/docker-ce#g' /etc/yum.repos.d/docker-ce.repo

    Install Docker

    yum install docker-ce -y

    Edit the Docker configuration file so images can be pulled from the private registry

    [root@k8s-master ~]# vim /etc/sysconfig/docker
    # /etc/sysconfig/docker
    # Modify these options if you want to change the way the docker daemon runs.
    # Append --insecure-registry to the existing OPTIONS line; a second
    # OPTIONS= assignment would silently overwrite the first one.
    OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry registry:5000'
    if [ -z "${DOCKER_CERT_PATH}" ]; then
        DOCKER_CERT_PATH=/etc/docker
    fi

    Start Docker and enable it at boot

    [root@k8s-master ~]# systemctl start docker.service
    [root@k8s-master ~]# systemctl enable docker.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
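    Optionally, confirm the daemon picked up the insecure-registry option (the exact wording of docker info output varies by version):

    docker info | grep -i -A1 insecure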

    ③ Install Kubernetes

    yum install kubernetes -y
    # Installing kubernetes pulls in the distro docker package automatically;
    # if docker-ce was installed first, the two conflict. Remove docker-ce
    # and install kubernetes again:
    yum list installed | grep docker
    yum remove -y docker-ce.x86_64
    yum install kubernetes -y

    ④ Configure and start Kubernetes

    Edit the /etc/kubernetes/apiserver configuration file

    ###
    # kubernetes system config
    #
    # The following values are used to configure the kube-apiserver
    #
    
    # The address on the local server to listen to.
    KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
    
    # The port on the local server to listen on.
    KUBE_API_PORT="--port=8080"
    
    # Port minions listen on
    # KUBELET_PORT="--kubelet-port=10250"
    
    # Comma separated list of nodes in the etcd cluster
    KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
    
    # Address range to use for services
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    
    # default admission control policies
    KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
    # Add your own!
    KUBE_API_ARGS=""

    Edit the /etc/kubernetes/config configuration file

    ###
    # kubernetes system config
    #
    # The following values are used to configure various aspects of all
    # kubernetes services, including
    #
    #   kube-apiserver.service
    #   kube-controller-manager.service
    #   kube-scheduler.service
    #   kubelet.service
    #   kube-proxy.service
    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=0"
    
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow-privileged=false"
    
    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=http://k8s-master:8080"

    Start the K8s services on the master and enable them at boot

    [root@k8s-master ~]# systemctl start kube-apiserver.service
    [root@k8s-master ~]# systemctl enable kube-apiserver.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
    [root@k8s-master ~]# systemctl start kube-controller-manager.service
    [root@k8s-master ~]# systemctl enable kube-controller-manager.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
    [root@k8s-master ~]# systemctl start kube-scheduler.service
    [root@k8s-master ~]# systemctl enable kube-scheduler.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
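    Before moving on, it is worth confirming that the apiserver answers on the insecure port; the /version endpoint requires no authentication:

    curl -s http://k8s-master:8080/version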

    2. Node Deployment

    ① Install Kubernetes and Docker

    yum install kubernetes -y

    Start Docker and enable it at boot

    [root@k8s-node-1 ~]# systemctl start docker.service 
    [root@k8s-node-1 ~]# systemctl enable docker.service 
    Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

    ② Configure and start Kubernetes

    Edit the /etc/kubernetes/config configuration file (same content as on the master)

    ###
    # kubernetes system config
    #
    # The following values are used to configure various aspects of all
    # kubernetes services, including
    #
    #   kube-apiserver.service
    #   kube-controller-manager.service
    #   kube-scheduler.service
    #   kubelet.service
    #   kube-proxy.service
    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=0"
    
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow-privileged=false"
    
    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=http://k8s-master:8080"

    Edit /etc/kubernetes/kubelet

    ###
    # kubernetes kubelet (minion) config
    
    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
    KUBELET_ADDRESS="--address=0.0.0.0"
    
    # The port for the info server to serve on
    # KUBELET_PORT="--port=10250"
    
    # You may leave this blank to use the actual hostname
    # (on the second node, set this to k8s-node-2)
    KUBELET_HOSTNAME="--hostname-override=k8s-node-1"
    # location of the api-server
    KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
    # pod infrastructure container
    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
    
    # Add your own!
    KUBELET_ARGS=""

    Start the node services and enable them at boot

    systemctl start kubelet.service
    systemctl enable kubelet.service
    systemctl start kube-proxy.service
    systemctl enable kube-proxy.service

    3. Check the cluster nodes and their status from the master

    [root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
    NAME         STATUS    AGE
    k8s-node-1   Ready     1m
    k8s-node-2   Ready     1m
    [root@k8s-master ~]# kubectl get nodes
    NAME         STATUS    AGE
    k8s-node-1   Ready     3m
    k8s-node-2   Ready     3m
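    For more detail on a single node (addresses, capacity, conditions, running pods), describe it, for example:

    kubectl -s http://k8s-master:8080 describe node k8s-node-1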

    4. Create the overlay network: Flannel

    ① Run the following on both the master and the nodes to install Flannel

    yum install flannel -y

    Configure Flannel on both the master and the nodes:

    vi /etc/sysconfig/flanneld
    
    # Flanneld configuration options  
    
    # etcd url location.  Point this to the server where etcd runs
    FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
    
    # etcd config key.  This is the configuration key that flannel queries
    # For address range assignment
    FLANNEL_ETCD_PREFIX="/atomic.io/network"
    
    # Any additional options that you want to pass
    #FLANNEL_OPTIONS=""

    ② Set the Flannel network in etcd

    etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'
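    Reading the key back confirms it sits under the FLANNEL_ETCD_PREFIX configured above:

    etcdctl -C http://etcd:2379 get /atomic.io/network/config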

    Start Flannel, then restart the Docker and Kubernetes services so they pick up the new network

    Master:
    systemctl start flanneld.service 
    systemctl enable flanneld.service 
    service docker restart
    systemctl restart kube-apiserver.service
    systemctl restart kube-controller-manager.service
    systemctl restart kube-scheduler.service
    Node:
    systemctl start flanneld.service 
    systemctl enable flanneld.service 
    service docker restart
    systemctl restart kubelet.service
    systemctl restart kube-proxy.service
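    After the restarts, each host should have a flannel interface inside the 172.16.0.0/16 range chosen above, and docker0 should have moved into the per-host subnet that flanneld wrote out (file path per the stock CentOS packaging):

    ip addr show | grep -E 'flannel|docker0'
    cat /run/flannel/subnet.env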
     
    The most effective way to succeed is to learn from people with experience!
• Original article: https://www.cnblogs.com/yanxinjiang/p/9448046.html