day01 K8S


    1. Why containerize applications with Docker

    The Docker engine unifies the infrastructure environment - the Docker environment

    • hardware configuration
    • OS version - as long as the Docker CE version matches, behavior is the same
    • heterogeneous runtime environments

    The Docker engine unifies how programs are packaged - the Docker image

    • Java programs
    • Python programs
    • Node.js programs

    The Docker engine unifies how programs are deployed - the Docker container

    • java -jar ... -> docker run
    • python manage.py runserver -> docker run
    • npm run dev -> docker run ...

    2. Drawbacks of Docker containerization on its own

    • single-host usage; no effective clustering
    • as container counts rise, management cost climbs
    • no effective disaster-recovery/self-healing mechanism
    • no predefined orchestration templates, so fast, large-scale container scheduling is impossible
    • no unified configuration-management center
    • no container lifecycle management tooling (what to do after a stop, what to do after a kill)
    • no graphical operations/management tools

    3. We need a container orchestration tool

    • docker compose (single host), docker swarm
    • Mesosphere + Marathon (very popular in 2015)
    • Kubernetes (K8S)

    4. K8S learning outline

    5. K8S overview

    Open-sourced by Google in 2014, it grew out of the Borg system and was later rewritten in Go and donated to the CNCF (Cloud Native Computing Foundation).
    Official site: kubernetes.io
    Use the official site as the main reference, and read GitHub a lot too: https://github.com/kubernetes/
    Look for the release versions; there are 4 major releases a year.

    https://kubernetes.io/zh/docs/concepts/overview/what-is-kubernetes/

    These notes use version 1.15, not 1.16; 1.16 changed a lot and deprecated a pile of APIs.
    k8s: the idea is piloting the shipping containers (your applications) to the production line. Rich ecosystem, active community.

    6. Advantages of K8S

    • automatic bin-packing, horizontal scaling, self-healing
    • service discovery and load balancing
    • automated rollouts (rolling update by default) and rollbacks
    • centralized configuration management and secret management (a configuration-center concept)
    • storage orchestration (supports external storage and orchestrating it)
    • batch job execution

    7. K8S quick start

    Four groups of basic concepts

    • Pod / Pod controller
    • Name / Namespace
    • Label / Label selector
    • Service / Ingress

    8. Pod

    A Pod is the smallest logical unit that can run in k8s.
    One Pod can run multiple containers; the containers in a Pod share the UTS, NET, and IPC namespaces.
    Think of a Pod as a pea pod: each container in the same Pod is a pea inside it.
    Running multiple containers in one Pod is called the sidecar pattern.

    9. Pod controllers

    A Pod controller is a template for starting Pods; it keeps the Pods in k8s running the way people expect (replica count, lifecycle, health checks).
    Common built-in Pod controllers:

    • Deployment (used the most)
    • DaemonSet (runs one copy on every compute node)
    • ReplicaSet
    • StatefulSet (manages Pods for stateful applications)
    • Job (for one-off tasks)
    • CronJob (for scheduled tasks)

    10. Name

    Internally, k8s defines every logical function as a "resource", so every resource should have its own name.
    A resource carries configuration such as API version (apiVersion), kind, metadata, spec, and status.
    The name is usually defined in the resource's metadata.

    11. Namespace

    As projects multiply, staff grows, and the cluster scales up, you need a way to isolate resources inside k8s: the namespace.
    Namespaces give a degree of isolation, but without particularly hard limits.
    (An internet company with several R&D departments can publish each department's projects into a different namespace.)
    A namespace can be understood as a virtual cluster group inside k8s.
    Resources in different namespaces may share a name; two resources of the same kind in the same namespace may not.
    Used well, namespaces let cluster administrators classify, manage, and browse the services delivered into k8s.
    The default namespaces in k8s are: default, kube-system, kube-public.
    To query a specific resource in k8s you must supply its namespace (example below).
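
    For example, with standard kubectl (not from these notes):

    kubectl get namespaces
    kubectl get pods -n kube-system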

    12. Label

    Labels are k8s's signature management mechanism, handy for classifying resource objects.
    One label can be attached to many resources and one resource can carry many labels: a many-to-many relationship.
    A resource with several labels can be managed along several dimensions.
    A label is a key=value pair (the value side is the stricter one: at most 63 characters).
    Similar to labels are "annotations".

    13. Label selectors

    Once resources are labeled, a label selector can filter on the given labels (see the sketch after this list).
    There are currently two kinds of selector: equality-based (equals / not-equals) and set-based (in / not-in / exists).
    Many resources support embedded label-selector fields:

    • matchLabels
    • matchExpressions
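
    A quick sketch of both selector styles with plain kubectl (the pod name nginx-pod and the labels are hypothetical):

    kubectl label pod nginx-pod env=dev
    kubectl get pods -l env=dev                 # equality-based
    kubectl get pods -l 'env in (dev,test)'     # set-based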

    14. Service (exposes a layer-4 interface)

    Every Pod is assigned its own IP address, which vanishes when the Pod is destroyed; each change brings a new IP.
    Service is the core concept that solves this problem.
    A Service can be seen as the external access interface for a group of Pods providing the same service (a headless Service may not even have an IP).
    Which Pods a Service covers is defined by a label selector (see the sketch below).
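
    A minimal illustration with standard kubectl (the deployment name nginx-dp is hypothetical, not from these notes):

    kubectl expose deployment nginx-dp --port=80
    kubectl get svc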

    15. Ingress (exposes a layer-7 interface; routes traffic by URL)

    Ingress is the application in a k8s cluster that works at layer 7 of the OSI model and exposes interfaces to the outside.
    A Service can only schedule L4 traffic, expressed as ip+port.
    Ingress can schedule traffic across different business domains and URL paths.

    16. The 3 networks in a K8S cluster

    Node network: the host network
    Pod network: the container network
    Service network: the cluster network

    How does the service network find the pod network? Through the core component kube-proxy.

    17. The main K8S component groups

    Core components

    • Configuration store -> the etcd service (like ZooKeeper, with its own HA mechanism; think of it as the database)
    • Master (control-plane) node
      • kube-apiserver service
        Provides the cluster-management REST API (authentication/authorization, data validation, and cluster state changes)
        Handles data exchange between the other modules and serves as the communication hub
        Entry point for resource-quota control
        Provides a complete cluster security mechanism

      • kube-controller-manager service
        The controller manager watches the whole cluster's state through the apiserver and keeps the cluster in its desired state
        Node Controller, Deployment Controller, Service Controller, Volume Controller, Endpoint Controller, Garbage Controller, Namespace Controller, Job Controller, Resource Quota Controller

      • kube-scheduler service
        Schedules Pods onto suitable compute nodes.
        Predicate policies (predicates)
        Priority policies (priorities)

    • Compute node
      • kubelet service
        Periodically fetches the desired state of the Pods on its node (which containers to run, how many replicas, how networking and storage are configured) and calls the container runtime's (Docker's) API to reach that state
        Periodically reports the node's state to the apiserver, which writes it into etcd for use during scheduling
        Cleans up images and containers, so images don't fill the disk and exited containers don't hold on to resources
      • kube-proxy service
        The network proxy running on each compute node; the carrier of the Service resource
        Builds the relationship between the pod network and the cluster network (clusterIP -> podIP) - its most important job
        Three common traffic-scheduling modes:
        Userspace (abandoned)
        Iptables (nearly abandoned)
        Ipvs (recommended): doesn't go through the iptables filter chains, so the host isn't buried in iptables rules; effectively an LVS embedded in k8s
        Responsible for creating, deleting, and updating scheduling rules, notifying the apiserver of its own updates, and fetching other kube-proxies' rule changes from the apiserver to update itself

    CLI client

    • kubectl

    Core add-ons

    • CNI network plugin -> flannel/calico
    • service-discovery plugin -> coredns
    • service-exposure plugin -> traefik
    • GUI management plugin -> Dashboard

    From the pod IP 172.7.21.5 you can tell at a glance that the pod runs on the 10.4.7.21 host.
    Decoding the 10.4.7.21 address:
    4: the data center - say the Yizhuang machine room, while 5 might be the Jiuxianqiao building
    7: the project - say the finance division's test environment, with 8 for its production environment

    For the container IP 172.7.21.5,
    the 7.21 maps directly to the 10.4.7.21 host - the relationship is visible to the naked eye.

    18. K8S logical architecture

    Start with at least 2 master nodes for four-nines (99.99%) stability.

    19. Deployment architecture

    10.4.7.200 is the ops host:

    • Docker private registry
    • repository for k8s resource manifests
    • shared storage (NFS)
    • certificate signing

    10.4.7.21
    etcd needs an odd number of members because of its voting mechanism (10.4.7.12, 10.4.7.21, 10.4.7.22)

    10.4.7.10 is the VIP

    The L4 load balancer keeps the apiserver highly available.
    The L7 load balancer is for Ingress.

    20. Common ways to install and deploy K8S

    • Minikube: single-node micro K8S (for learning and preview only)
    • binary installation (first choice for production, and recommended for beginners to learn from)
      hard - takes a full day
    • kubeadm: k8s's own deployment tool, which runs inside K8S (relatively simple; recommended for experienced users)

    21. Preparing for installation

    • prepare 5 VMs (2c/2g/50g) on the 10.4.7.0/24 network
    • preinstall the CentOS 7.6 OS with basic tuning done
    • install bind9 and run a self-hosted DNS
    • prepare the self-signed certificate environment
    • install Docker and deploy the Harbor private registry

    1. Set the hostname
    2. Edit the NIC configuration file

    22. K8S groundwork: installing bind9

    On every machine:
    stop the firewall: systemctl stop firewalld
    install the EPEL repo: yum install epel-release -y
    install the necessary tools: yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils -y

    Ingress L7 traffic scheduling needs domain names, so we build our own DNS, and the containers follow its resolution records.
    bind9 is the best open-source DNS software on Linux.
    On jdss7-11 run yum install bind -y

    22.1 Edit the main config file /etc/named.conf

    // listen on this host's own address
    listen-on port 53 { 10.4.7.11; };
    // allow queries from any client
    allow-query     { any; };
    // upstream DNS to forward to
    forwarders { 172.31.4.1; };
    
    // answer queries recursively
    recursion yes;
    
    // turn off dnssec to save resources
    dnssec-enable no;
    dnssec-validation no;
    

    Check the config with the named-checkconf command.

    22.2 Configure the zone declarations

    Two domains are planned: the host domain host.com and the business domain od.com.
    Hostname = location code + last two octets of the IP (10.4.7.21), e.g.
    YZSJHL7-21.host.com
    for Yizhuang Shiji Hulian (the 10.4 segment).

    Append to /etc/named.rfc1912.zones:

    zone "host.com" IN {
    	type master;
    	file "host.com.zone";
    	allow-update { 10.4.7.11; };
    };
    
    zone "od.com" IN {
    	type master;
    	file "od.com.zone";
    	allow-update { 10.4.7.11; };
    };
    

    22.3 Edit the zone data files

    /var/named/host.com.zone

    $ORIGIN host.com.
    $TTL 600        ;10 minutes
    @	IN  SOA   dns.host.com.  dnsadmin.host.com. (
    				2021120201    ; serial
    				10800	    ; refresh (3 hours)
    				900         ; retry (15 minutes)
    				604800      ; expire (1 week)
    				86400       ; minimum (1 day)
    				)
    			NS	dns.host.com.
    $TTL 60 ; 1 minute
    dns		A	10.4.7.11
    JDSS7-11	A	10.4.7.11
    JDSS7-12	A	10.4.7.12
    JDSS7-21	A	10.4.7.21
    JDSS7-22	A	10.4.7.22
    JDSS7-200	A	10.4.7.200
    

    /var/named/od.com.zone

    $ORIGIN od.com.
    $TTL 600        ;10 minutes
    @	IN  SOA   dns.od.com.  dnsadmin.od.com. (
    				2021120201    ; serial
    				10800	    ; refresh (3 hours)
    				900         ; retry (15 minutes)
    				604800      ; expire (1 week)
    				86400       ; minimum (1 day)
    				)
    			NS	dns.od.com.
    $TTL 60 ; 1 minute
    dns		A	10.4.7.11
    

    Check again with named-checkconf.

    22.4 Start bind

    systemctl start named
    netstat -luntp | grep 53
    It's listening on 53; verify resolution:

    dig -t A jdss7-21.host.com @10.4.7.11 +short

    Edit the NIC config file /etc/sysconfig/network-scripts/ifcfg-enp0s3:

    DEVICE=enp0s3
    TYPE=Ethernet
    ONBOOT=yes
    NM_CONTROLLED=yes
    BOOTPROTO=static
    HWADDR=08:00:27:42:a9:bc
    IPADDR=10.4.7.11
    NETMASK=255.255.255.0
    GATEWAY=10.4.7.2
    DNS1=10.4.7.11
    

    Restart the network service: systemctl restart network
    Name resolution now goes through the local DNS server.

    22.5 Configure the DNS client

    On the Linux hosts,
    /etc/resolv.conf:

    # search domain for short names
    search host.com

    ping jdss7-21 works: the host domain can use short names; the business domain must use fully qualified names.

    22.6 The host domain

    host.com is the host domain - a fake domain, deliberately unrelated to the business.

    23. Preparing the certificate-signing environment

    k8s needs a pile of certificates for its components to talk to each other.

    We use CFSSL to sign certificates,
    installed on the jdss7-200 machine:

    wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
    wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
    wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
    chmod +x /usr/bin/cfssl*
    

    mkdir -p /opt/certs
    Create the root CA certificate.
    (1) Create the CSR file for the CA:
    /opt/certs/ca-csr.json

    {
    	"CN":"OldboyEdu",
    	"host":[],
    	"key":{
    		"algo":"rsa",
    		"size":2048
    	},
    	"names":[
    		{
    			"C":"CN",
    			"ST":"beijing",
    			"L":"beijing",
    			"O":"od",
    			"OU":"ops"
    		}
    	],
    	"ca":{
    		"expiry":"175200h"
    	}
    }
    

    CN: common name. Browsers use this field to judge whether a website is legitimate, so it usually holds the domain name. Very important.
    C: country
    ST: state/province
    L: locality (city)
    O: organization name (the company)
    OU: organization unit (the department)

    Sign the certificate; this generates the root certificate ca.pem and the root key ca-key.pem (plus ca.csr).
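
    The signing command itself isn't shown in these notes; the usual CFSSL invocation (using the cfssl-json binary installed above) would be:

    cd /opt/certs
    cfssl gencert -initca ca-csr.json | cfssl-json -bare ca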

    24. Installing Docker

    Install on the 3 machines jdss7-200, jdss7-21, jdss7-22:

    curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
    

    Edit the Docker config file.
    On jdss7-21: /etc/docker/daemon.json, and create the directory /data/docker

    {
        "graph":"/data/docker",
        "storage-driver":"overlay2",
        "insecure-registries":[
            "registry.access.redhat.com",
            "quay.io",
            "harbor.od.com"
        ],
        "registry-mirrors": ["http://f1361db2.m.daocloud.io","https://q2qr04ke.mirror.aliyuncs.com"],
        "bip":"172.7.21.1/24",
        "exec-opts":[
            "native.cgroupdriver=systemd"
        ],
        "live-restore":true
    }
    

    On jdss7-22: /etc/docker/daemon.json, and create the directory /data/docker

    {
        "graph":"/data/docker",
        "storage-driver":"overlay2",
        "insecure-registries":[
            "registry.access.redhat.com",
            "quay.io",
    	"harbor.od.com"
        ],
        "registry-mirrors": ["http://f1361db2.m.daocloud.io","https://q2qr04ke.mirror.aliyuncs.com"],
        "bip":"172.7.22.1/24",
        "exec-opts":[
            "native.cgroupdriver=systemd"
        ],
        "live-restore":true
    }
    

    Start Docker on both jdss7-21 and jdss7-22:

    systemctl start docker
    

    25. Building the Harbor private registry

    Versions before 1.7.5 have vulnerabilities; choose 1.7.6 or later.

    From Harbor's releases on GitHub,
    pick the v1.8.3 offline installer:

    https://storage.googleapis.com/harbor-releases/release-1.8.0/harbor-offline-installer-v1.8.3.tgz
    

    25.1 On jdss7-200, in the /opt/src directory

    wget https://storage.googleapis.com/harbor-releases/release-1.8.0/harbor-offline-installer-v1.8.3.tgz
    tar xf harbor-offline-installer-v1.8.3.tgz -C /opt/
    cd /opt
    mv harbor harbor-v1.8.3
    ln -s /opt/harbor-v1.8.3 /opt/harbor
    cd /opt/harbor
    
    
    

    25.2 Edit the Harbor config file

    vim /opt/harbor/harbor.yml
    (od.com is the business domain)

    hostname: harbor.od.com
    http:
        port: 180
    harbor_admin_password: Harbor12345
    log:
      level: info
      rotate_count: 50    # number of rotated log files to keep
      rotate_size: 200M   # rotate each time a file reaches 200MB
      location: /data/harbor/logs
    data_volume: /data/harbor
    external_url: http://harbor.od.com:80   # needed once a proxy fronts Harbor or auth fails; optional at first, add it after nginx is set up to proxy Harbor
    

    A special note

    Edit harbor.yml, uncomment external_url, and set it to:
    
    external_url: http://harbor.od.com:80
    Then docker-compose down to stop all services, delete the generated config under ./common/config (rm -rf), and re-run install.sh to regenerate it - that resolves the issue.
    
    Rough meaning of the option: enable it whenever an external proxy is used (an nginx hung in front of Harbor means this must be on).

    The full config:

    # Configuration file of Harbor
    
    # The IP address or hostname to access admin UI and registry service.
    # DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
    hostname: harbor.od.com
    
    # http related config
    http:
      # port for http, default is 80. If https enabled, this port will redirect to https port
      port: 180
    
    # https related config
    # https:
    #   # https port for harbor, default is 443
    #   port: 443
    #   # The path of cert and key files for nginx
    #   certificate: /your/certificate/path
    #   private_key: /your/private/key/path
    
    # Uncomment external_url if you want to enable external proxy
    # And when it enabled the hostname will no longer used
    external_url: http://harbor.od.com:80
    
    # The initial password of Harbor admin
    # It only works in first time to install harbor
    # Remember Change the admin password from UI after launching Harbor.
    harbor_admin_password: Harbor12345
    
    # Harbor DB configuration
    database:
      # The password for the root user of Harbor DB. Change this before any production use.
      password: root123
    
    # The default data volume
    data_volume: /data/harbor
    
    # Harbor Storage settings by default is using /data dir on local filesystem
    # Uncomment storage_service setting If you want to using external storage
    # storage_service:
    #   # ca_bundle is the path to the custom root ca certificate, which will be injected into the truststore
    #   # of registry's and chart repository's containers.  This is usually needed when the user hosts a internal storage with self signed certificate.
    #   ca_bundle:
    
    #   # storage backend, default is filesystem, options include filesystem, azure, gcs, s3, swift and oss
    #   # for more info about this configuration please refer https://docs.docker.com/registry/configuration/
    #   filesystem:
    #     maxthreads: 100
    #   # set disable to true when you want to disable registry redirect
    #   redirect:
    #     disabled: false
    
    # Clair configuration
    clair: 
      # The interval of clair updaters, the unit is hour, set to 0 to disable the updaters.
      updaters_interval: 12
    
      # Config http proxy for Clair, e.g. http://my.proxy.com:3128
      # Clair doesn't need to connect to harbor internal components via http proxy.
      http_proxy:
      https_proxy:
      no_proxy: 127.0.0.1,localhost,core,registry
    
    jobservice:
      # Maximum number of job workers in job service  
      max_job_workers: 10
    
    chart:
      # Change the value of absolute_url to enabled can enable absolute url in chart
      absolute_url: disabled
    
    # Log configurations
    log:
      # options are debug, info, warning, error, fatal
      level: info
      # Log files are rotated log_rotate_count times before being removed. If count is 0, old versions are removed rather than rotated.
      rotate_count: 50
      # Log files are rotated only if they grow bigger than log_rotate_size bytes. If size is followed by k, the size is assumed to be in kilobytes. 
      # If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G 
      # are all valid.
      rotate_size: 200M
      # The directory on your host that store log
      location: /data/harbor/logs
    
    #This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
    _version: 1.8.0
    
    # Uncomment external_database if using external database.
    # external_database:
    #   harbor:
    #     host: harbor_db_host
    #     port: harbor_db_port
    #     db_name: harbor_db_name
    #     username: harbor_db_username
    #     password: harbor_db_password
    #     ssl_mode: disable
    #   clair:
    #     host: clair_db_host
    #     port: clair_db_port
    #     db_name: clair_db_name
    #     username: clair_db_username
    #     password: clair_db_password
    #     ssl_mode: disable
    #   notary_signer:
    #     host: notary_signer_db_host
    #     port: notary_signer_db_port
    #     db_name: notary_signer_db_name
    #     username: notary_signer_db_username
    #     password: notary_signer_db_password
    #     ssl_mode: disable
    #   notary_server:
    #     host: notary_server_db_host
    #     port: notary_server_db_port
    #     db_name: notary_server_db_name
    #     username: notary_server_db_username
    #     password: notary_server_db_password
    #     ssl_mode: disable
    
    # Uncomment external_redis if using external Redis server
    # external_redis:
    #   host: redis
    #   port: 6379
    #   password:
    #   # db_index 0 is for core, it's unchangeable
    #   registry_db_index: 1
    #   jobservice_db_index: 2
    #   chartmuseum_db_index: 3
    
    # Uncomment uaa for trusting the certificate of uaa instance that is hosted via self-signed cert.
    # uaa:
    #   ca_file: /path/to/ca
    

    25.3 Harbor runs on top of Docker (via docker-compose)

    yum install docker-compose -y

    25.4 Install Harbor

    cd /opt/harbor
    ./install.sh

    docker-compose ps shows the stack of containers it started

    25.5 Install nginx

    yum install nginx -y
    Edit the config file:
    vim /etc/nginx/conf.d/harbor.od.com.conf

    server {
            listen  80;
            server_name harbor.od.com;

            # Harbor image layers vary in size, so allow large uploads
            client_max_body_size 1000m;

            location / {
                    proxy_pass http://127.0.0.1:180;
            }
    }
    

    nginx -t
    systemctl enable nginx
    systemctl start nginx

    25.6 Resolve harbor.od.com on the DNS server

    On jdss7-11, add a record for this domain; the zone serial must be rolled forward by one.
    vim /var/named/od.com.zone
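    The screenshot is lost; given the dig result below, the appended record (with the serial bumped by one) would look like:

    harbor		A	10.4.7.200
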
    systemctl restart named
    [root@jdss7-11 ~]# dig -t A harbor.od.com +short
    10.4.7.200

    On jdss7-200, run curl harbor.od.com to verify.

    25.7 Open harbor.od.com in a browser

    Create a new project.

    On jdss7-200, pull an nginx:1.7.9 image:
    docker pull docker.io/library/nginx:1.7.9
    docker pull nginx:1.7.9
    Tag the image pulled from the public registry as harbor.od.com/public/nginx:v1.7.9:
    docker tag 84581e99d807 harbor.od.com/public/nginx:v1.7.9

    Try pushing this image to the harbor.od.com registry.
    It fails, so run docker login first:
    docker login harbor.od.com

    Push again: docker push harbor.od.com/public/nginx:v1.7.9

    26. Installing the master service - the etcd cluster

    On 3 machines in total: jdss7-12, jdss7-21, jdss7-22

    cd /opt/
    mkdir -p src
    cd src
    
    

    26.1 On the cert server (7-200), generate the certificates the etcd members depend on

    On the root-CA machine, create a config file based on the root certificate:
    /opt/certs/ca-config.json

    {
        "signing":{
            "default":{
                "expiry":"175200h"
            },
            "profiles":{
                "server":{
                    "expiry":"175200h",
                    "usages":[
                        "signing",
                        "key encipherment",
                        "server auth"
                    ]
                },
                "client":{
                    "expiry":"175200h",
                    "usages":[
                        "signing",
                        "key encipherment",
                        "client auth"
                    ]
                },
                "peer":{
                    "expiry":"175200h",
                    "usages":[
                        "signing",
                        "key encipherment",
                        "server auth",
                        "client auth"
                    ]
                }
            }
        }
    }
    

    Create etcd-peer-csr.json:

    {
        "CN":"k8s-etcd",
        "hosts":[
            "10.4.7.11",
            "10.4.7.12",
            "10.4.7.21",
            "10.4.7.22"
        ],
        "key":{
            "algo":"rsa",
            "size":2048
        },
        "names":[
            {
                "C":"CN",
                "ST":"beijing",
                "L":"beijing",
                "O":"od",
                "OU":"ops"
            }
        ]
    }
    

    Sign the certificate:

    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json

    Run bare like that, the output only goes to stdout; pipe it through cfssl-json to write the files:

    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer

    The certificate files are now generated.

    26.2 Install etcd

    Create the etcd user on jdss7-12:

    [root@jdss7-12 ~]# useradd -s /sbin/nologin -M etcd
    Creating mailbox file: File exists
    [root@jdss7-12 ~]# id etcd
    uid=1001(etcd) gid=1001(etcd) groups=1001(etcd)
    

    On jdss7-12 download etcd v3.1.20
    into /opt/src:

    tar xf etcd-v3.1.20-linux-amd64.tar.gz -C /opt/
    cd /opt/
    mv etcd-v3.1.20-linux-amd64/ etcd-v3.1.20
    ln -s etcd-v3.1.20 etcd
     mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
    
    

    etcd needs 3 certificate files: ca.pem, etcd-peer-key.pem (the private key), and etcd-peer.pem.

    Create the etcd startup script
    /opt/etcd/etcd-server-startup.sh:

    #!/bin/sh
    ./etcd  --name etcd-server-7-12 \
            --data-dir /data/etcd/etcd-server \
            --listen-peer-urls https://10.4.7.12:2380 \
            --listen-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
            --quota-backend-bytes 8000000000 \
            --initial-advertise-peer-urls https://10.4.7.12:2380 \
            --advertise-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
            --initial-cluster etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \
            --ca-file ./certs/ca.pem \
            --cert-file ./certs/etcd-peer.pem \
            --key-file ./certs/etcd-peer-key.pem \
            --client-cert-auth \
            --trusted-ca-file ./certs/ca.pem \
            --peer-ca-file ./certs/ca.pem \
            --peer-cert-file ./certs/etcd-peer.pem \
            --peer-key-file ./certs/etcd-peer-key.pem \
            --peer-client-cert-auth \
            --peer-trusted-ca-file ./certs/ca.pem \
            --log-output stdout
    

    chmod +x etcd-server-startup.sh
    chown -R etcd.etcd /opt/etcd-v3.1.20/
    chown -R etcd.etcd /data/etcd/
    chown -R etcd.etcd /data/logs/etcd-server

    Then rely on supervisor to run the startup script:
    yum install supervisor -y
    systemctl start supervisord
    systemctl enable supervisord

    Let supervisord manage etcd:
    /etc/supervisord.d/etcd-server.ini

    [program:etcd-server-7-12]
    command=/opt/etcd/etcd-server-startup.sh	;
    numprocs=1					;
    directory=/opt/etcd				;
    autostart=true					;
    autorestart=true				;
    startsecs=30					;
    startretries=3					;
    exitcodes=0,2					;
    stopsignal=QUIT					;
    stopwaitsecs=10					;
    user=etcd					;
    redirect_stderr=true				;
    stdout_logfile=/data/logs/etcd-server/etcd.stdout.log	;
    stdout_logfile_maxbytes=64MB			;
    stdout_logfile_backups=4				;
    stdout_capture_maxbytes=1MB			;
    stdout_events_enabled=false			;
    

    supervisorctl update

    Check the health of the etcd cluster:

    ./etcdctl cluster-health
    ./etcdctl member list
    

    27. Installing the apiserver

    Install on the 2 machines jdss7-21 and jdss7-22.
    The k8s package, v1.15.2:

    https://storage.googleapis.com/kubernetes-release/release/v1.15.2/kubernetes-server-linux-amd64.tar.gz

    Install on jdss7-21 and jdss7-22:

    cd /opt/src
    wget https://storage.googleapis.com/kubernetes-release/release/v1.15.2/kubernetes-server-linux-amd64.tar.gz
    tar xf kubernetes-server-linux-amd64.tar.gz -C /opt/
    cd /opt
    mv kubernetes/ kubernetes-v1.15.2
    ln -s kubernetes-v1.15.2/ kubernetes
    cd kubernetes/server/bin
    

    The files ending in .tar are Docker images and aren't needed: rm -f *.tar; rm -f *_tag
    What we are deploying is kube-apiserver.

    27.1 Sign the client certificate apiserver uses to connect to etcd

    This certificate is for apiserver-etcd traffic:
    apiserver is the client, etcd is the server.
    On jdss7-200, create the CSR json config:
    /opt/certs/client-csr.json

    {
        "CN":"k8s-node",
        "hosts":[
        ],
        "key":{
            "algo":"rsa",
            "size":2048
        },
        "names":[
            {
                "C":"CN",
                "ST":"beijing",
                "L":"beijing",
                "O":"od",
                "OU":"ops"
            }
        ]
    }
    
    

    Generate the certificate apiserver uses to connect to etcd:
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare client

    27.2 Generate the certificate other nodes use to connect to the apiserver

    On jdss7-200:
    vim apiserver-csr.json
    {
        "CN":"k8s-apiserver",
        "hosts":[
            "127.0.0.1",
            "192.168.0.1",
            "kubernetes.default",
            "kubernetes.default.svc",
            "kubernetes.default.svc.cluster",
            "kubernetes.default.svc.cluster.local",
            "10.4.7.10",
            "10.4.7.21",
            "10.4.7.22",
            "10.4.7.23"
        ],
        "key":{
            "algo":"rsa",
            "size":2048
        },
        "names":[
            {
                "C":"CN",
                "ST":"beijing",
                "L":"beijing",
                "O":"od",
                "OU":"ops"
            }
        ]
    }
    

    Generate the certificate other nodes use to connect to the apiserver:
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json | cfssl-json -bare apiserver

    27.3 apiserver configuration - copy the certificates

    On jdss7-21:

    mkdir -p /opt/kubernetes/server/bin/cert

    Copy over the cert files: ca.pem, ca-key.pem, client.pem, client-key.pem, apiserver.pem, apiserver-key.pem

    27.4 apiserver configuration - create the startup config files

    mkdir -p /opt/kubernetes/server/bin/conf

    vim /opt/kubernetes/server/bin/conf/audit.yaml
    This is for k8s audit logging:

    apiVersion: audit.k8s.io/v1beta1 # This is required.
    kind: Policy
    # Don't generate audit events for all requests in RequestReceived stage.
    omitStages:
      - "RequestReceived"
    rules:
      # Log pod changes at RequestResponse level
      - level: RequestResponse
        resources:
        - group : ""
            # Resource "pods" doesn't match requests to any subresource of pods
            # which is consistent with th RBAC policy.
          resource: ["pods"]
      # Log "pods/log","pods/status" at Metadata level
      - level: Metadata
        resources:
        - group: ""
          resource: ["pods/log","pods/status"]
      # Don't log requests to a configmap called "controller-leader"
      - level: None
        resources:
        - group: ""
          resources: ["configmaps"]
          resourceNames: ["controller-leader"]
      # Don't log watch requests by the "system:kube-proxy" on endpoints or services
      - level: None
        users: ["system:kube-proxy"]
        verbs: ["watch"]
        resources:
        - group: "" # core API group
          resources: ["endpoints","services"]
      # Don't log authenticated requests to certain non-resource URL paths
      - level: None
        userGroups: ["system:authenticated"]
        nonResourceURLs:
        - "/api*" # Wildcard matching
        - "/version"
      # Log the request body of configmap changes in kube-system.
      - level: Request
        resources:
        - group: "" # core API group
          resources: ["configmaps"]
        namespaces: ["kube-system"]
      # log configmap secret changes in all other namespaces at the Metadata level
      - level: Metadata
        resources:
        - group: "" # core API group
          resources: ["secrets","configmaps"]
      # Log all other resources in core and extensions at the Request level
      - level: Request
        resources:
        - group: "" # core API group
        - group: "extensions" # Version of group should NOT be included
      # A catch-all rule to log all other requests at the Metadata level
      - level: Metadata
        omitStages:
           - "RequestReceived"
    

    Write kube-apiserver.sh:

    #!/bin/bash
    ./kube-apiserver \
      --apiserver-count 2 \
      --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
      --audit-policy-file ./conf/audit.yaml \
      --authorization-mode RBAC \
      --client-ca-file ./cert/ca.pem \
      --requestheader-client-ca-file ./cert/ca.pem \
      --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
      --etcd-cafile ./cert/ca.pem \
      --etcd-certfile ./cert/client.pem \
      --etcd-keyfile ./cert/client-key.pem \
      --etcd-servers https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
      --service-account-key-file ./cert/ca-key.pem \
      --service-cluster-ip-range 192.168.0.0/16 \
      --service-node-port-range 3000-29999 \
      --target-ram-mb=1024 \
      --kubelet-client-certificate ./cert/client.pem \
      --kubelet-client-key ./cert/client-key.pem \
      --log-dir /data/logs/kubernetes/kube-apiserver \
      --tls-cert-file ./cert/apiserver.pem \
      --tls-private-key-file ./cert/apiserver-key.pem \
      --v 2
    

    chmod +x kube-apiserver.sh

    Write the supervisor config so apiserver is pulled up automatically:
    vim /etc/supervisord.d/kube-apiserver.ini

    [program:kube-apiserver-7-21]
    command=/opt/kubernetes/server/bin/kube-apiserver.sh 	;
    numprocs=1						;
    directory=/opt/kubernetes/server/bin			;
    autostart=true						;
    autorestart=true					;
    startsecs=30						;
    startretries=3						;
    exitcodes=0,2						;
    stopsignal=QUIT						;
    stopwaitsecs=10						;
    user=root						;
    redirect_stderr=true					;
    stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log ;
    stdout_logfile_maxbytes=64MB				;
    stdout_logfile_backups=4				;	
    stdout_capture_maxbytes=1MB				;
    stdout_events_enabled=false				;
    

    Create the log directory: mkdir -p /data/logs/kubernetes/kube-apiserver

    supervisorctl update
    supervisorctl status to check the process came up

    Once started, kube-apiserver listens on ports 8080 and 6443.
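
    A quick check (same netstat pattern used earlier in these notes):

    netstat -luntp | grep kube-api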

    28. Installing the L4 load balancer

    VIP: 10.4.7.10

    Port 7443 on 10.4.7.10 reverse-proxies port 6443 on the two apiserver machines.

    28.1 Install nginx on jdss7-11 and jdss7-12

    yum install nginx -y
    yum install nginx-all-modules -y
    

    Configure the layer-4 reverse proxy:
    vim /etc/nginx/nginx.conf
    the stream block does L4 load balancing:

    stream {
            upstream kube-apiserver {
                    server 10.4.7.21:6443 max_fails=3 fail_timeout=30s;
                    server 10.4.7.22:6443 max_fails=3 fail_timeout=30s;
            }
    
            server {
                    listen 7443;
                    proxy_connect_timeout 2s;
                    proxy_timeout 900s;
                    proxy_pass kube-apiserver;
            }
    }
    

    nginx -t checks whether the config is correct.
    Start nginx:

    systemctl start nginx
    

    Enable nginx at boot:

    systemctl enable nginx
    

    28.2 Install keepalived and float a VIP using VRRP

    Use keepalived on jdss7-11 and jdss7-12:

    yum install keepalived -y

    Write the keepalived port-monitoring script /etc/keepalived/check_port.sh:

    #!/bin/bash
    # keepalived port-monitoring script
    # Usage - in the keepalived config file:
    # vrrp_script check_port {
    #       script "/etc/keepalived/check_port.sh 6379"
    #       interval 2 # how often to run the check, in seconds
    #}
    CHK_PORT=$1
    if [[ -n "$CHK_PORT" ]];then
            PORT_PROCESS=`ss -tln | grep $CHK_PORT | wc -l`
            if [[ $PORT_PROCESS -eq 0 ]];then
                    echo "Port $CHK_PORT Is Not Used,End."
                    exit 1
            fi
    else
            echo "Check Port Cant Be Empty"
    fi
    

    Write the keepalived config files.

    • jdss7-11 is the keepalived master
      /etc/keepalived/keepalived.conf
    ! Configuration File for keepalived
    
    global_defs {
       router_id 10.4.7.11
    }
    
    vrrp_script chk_nginx {
        script "/etc/keepalived/check_port.sh 7443"
        interval 2
        weight -20
    }
    vrrp_instance VI_1 {
        state MASTER
        interface enp0s3
        virtual_router_id 251
        priority 100
    advert_int 1 # advertisement interval; advertisements carry the host priority, heartbeat, etc.
        mcast_src_ip 10.4.7.11
    nopreempt # non-preemptive mode
    
        authentication {
            auth_type PASS
            auth_pass 11111111
        }
    # track_script ties failover (VIP floating) to the check script; without it, only default ping-style detection is used
    track_script {
            chk_nginx
        }
        virtual_ipaddress {
            10.4.7.10
        }
    }
    
    • jdss7-12 is the keepalived backup
      /etc/keepalived/keepalived.conf
    ! Configuration File for keepalived
    
    global_defs {
       router_id 10.4.7.12
    }
    
    vrrp_script chk_nginx {
        script "/etc/keepalived/check_port.sh 7443"
        interval 2
        weight -20
    }
    vrrp_instance VI_1 {
        state BACKUP
        interface enp0s3
        virtual_router_id 251
        priority 90
        advert_int 1
        mcast_src_ip 10.4.7.12
    
        authentication {
            auth_type PASS
            auth_pass 11111111
        }
        track_script {
            chk_nginx
        }
        virtual_ipaddress {
            10.4.7.10
        }
    }
    
    • Start keepalived:
      systemctl start keepalived
      systemctl enable keepalived
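
    To confirm the VIP landed on the master (interface name as configured above):

    ip addr show enp0s3 | grep 10.4.7.10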

    29. Deploying the master's controller-manager (no certificate needed)

    Add the startup script on jdss7-21 and jdss7-22:
    /opt/kubernetes/server/bin/kube-controller-manager.sh

    #!/bin/sh
    ./kube-controller-manager \
      --cluster-cidr 172.7.0.0/16 \
      --leader-elect true \
      --log-dir /data/logs/kubernetes/kube-controller-manager \
      --master http://127.0.0.1:8080 \
      --service-account-private-key-file ./cert/ca-key.pem \
      --service-cluster-ip-range 192.168.0.0/16 \
      --root-ca-file ./cert/ca.pem \
      --v 2
    

    Also add the supervisor config:
    /etc/supervisord.d/kube-controller-manager.ini

    [program:kube-controller-manager-7-21]
    command=/opt/kubernetes/server/bin/kube-controller-manager.sh 	;
    numprocs=1						;
    directory=/opt/kubernetes/server/bin			;
    autostart=true						;
    autorestart=true					;
    startsecs=30						;
    startretries=3						;
    exitcodes=0,2						;
    stopsignal=QUIT						;
    stopwaitsecs=10						;
    user=root						;
    redirect_stderr=true					;
    stdout_logfile=/data/logs/kubernetes/kube-controller-manager/kube-controller-manager.stdout.log ;
    stdout_logfile_maxbytes=64MB				;
    stdout_logfile_backups=4				;	
    stdout_capture_maxbytes=1MB				;
    stdout_events_enabled=false
    

    30. Deploying the master's kube-scheduler (no certificate needed)

    Add the startup script on jdss7-21 and jdss7-22:
    /opt/kubernetes/server/bin/kube-scheduler.sh

    #!/bin/sh
    # --master points at the apiserver on this host over plain http (8080), so no certs are involved;
    # an https master would require certificates.
    ./kube-scheduler \
      --leader-elect \
      --log-dir /data/logs/kubernetes/kube-scheduler \
      --master http://127.0.0.1:8080 \
      --v 2
    

    mkdir -p /data/logs/kubernetes/kube-scheduler
    chmod +x *.sh

    Add it to supervisor:
    /etc/supervisord.d/kube-scheduler.ini

    [program:kube-scheduler-7-21]
    command=/opt/kubernetes/server/bin/kube-scheduler.sh 	;
    numprocs=1						;
    directory=/opt/kubernetes/server/bin			;
    autostart=true						;
    autorestart=true					;
    startsecs=30						;
    startretries=3						;
    exitcodes=0,2						;
    stopsignal=QUIT						;
    stopwaitsecs=10						;
    user=root						;
    redirect_stderr=true					;
    stdout_logfile=/data/logs/kubernetes/kube-scheduler/kube-scheduler.stdout.log ;
    stdout_logfile_maxbytes=64MB				;
    stdout_logfile_backups=4				;	
    stdout_capture_maxbytes=1MB				;
    stdout_events_enabled=false				;
    
    

    31. Checking cluster health

    On jdss7-21 and jdss7-22:

    Create a symlink:
    ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl

    Check cluster health - something is off:
    kubectl get cs

    Checking the etcd cluster by itself shows it healthy.

    It turned out the clock on the misbehaving 10.4.7.12 machine was wrong.
    After syncing the time, the master components all report healthy again.
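
    A typical one-shot sync (assuming ntpdate is installed; any reachable NTP source works):

    ntpdate time1.aliyun.com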

    32. Deploying the compute-node service: kubelet

    32.1 Sign the certificate - kubelet serves https, and the apiserver also talks back to kubelet

    On jdss7-200:
    kubelet needs a server certificate signed for itself, with every node IP it might ever run on added to the hosts list.
    /opt/certs/kubelet-csr.json

    {
        "CN":"k8s-kubelet",
        "hosts":[
            "127.0.0.1",
            "10.4.7.10",
            "10.4.7.21",
            "10.4.7.22",
            "10.4.7.23",
            "10.4.7.24",
            "10.4.7.25",
            "10.4.7.26",
            "10.4.7.27",
            "10.4.7.28"
        ],
        "key":{
            "algo":"rsa",
            "size":2048
        },
        "names":[
            {
                "C":"CN",
                "ST":"beijing",
                "L":"beijing",
                "O":"od",
                "OU":"ops"
            }
        ]
    }
    

    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet

    32.2 Copy the freshly signed kubelet-key.pem and kubelet.pem to jdss7-21 and jdss7-22

    jdss7-21,jdss7-22
    /opt/kubernetes/server/bin/cert

    .
    ├── apiserver-key.pem
    ├── apiserver.pem
    ├── ca-key.pem
    ├── ca.pem
    ├── client-key.pem
    ├── client.pem
    ├── kubelet-key.pem
    └── kubelet.pem
    
    0 directories, 8 files
    

    32.3 Create the config file (kubelet.kubeconfig)

    Run on the single machine jdss7-21:

    • set-cluster
    # kubectl config set-cluster myk8s \
     --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
     --embed-certs=true \
     --server=https://10.4.7.10:7443 \
     --kubeconfig=kubelet.kubeconfig
    Cluster "myk8s" set.
    Notes:
    --certificate-authority points at the CA cert; --embed-certs embeds the certificates into the kubeconfig; --server is the VIP, so all traffic to the apiserver goes through the VIP.
    


    • set-credentials
    kubectl config set-credentials k8s-node \
     --client-certificate=/opt/kubernetes/server/bin/cert/client.pem \
     --client-key=/opt/kubernetes/server/bin/cert/client-key.pem \
     --embed-certs=true \
     --kubeconfig=kubelet.kubeconfig
    User "k8s-node" set.
    Notes:
    client.pem and client-key.pem are given because we talk to the apiserver as a client: the apiserver is the server side and we hold the client key pair.
    kubelet acts as the client talking to the apiserver's server side.
    
    • set-context (set the context)
    kubectl config set-context myk8s-context \
     --cluster=myk8s \
     --user=k8s-node \
     --kubeconfig=kubelet.kubeconfig
    Context "myk8s-context" created.
    
    
    • use-context (switch to the context)
     kubectl config use-context myk8s-context \
     --kubeconfig=kubelet.kubeconfig
    Switched to context "myk8s-context".
    

    32.4 Create the cluster resource (stored in etcd)

    • Create the resource manifest k8s-node.yaml
      /opt/kubernetes/server/bin/conf/k8s-node.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: k8s-node
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:node
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: k8s-node
    
    Notes:
    The binding is named k8s-node; it grants the user k8s-node the ClusterRole named system:node.
    In effect this creates a k8s user called k8s-node and gives it the permissions of a compute node.
    

    Apply k8s-node.yaml:
    kubectl create -f k8s-node.yaml (creates the cluster resource, stored in etcd)

    kubectl get clusterrolebinding k8s-node

    32.5 Copy the config file to jdss7-22

    Run the copy in the /opt/kubernetes/server/bin/conf directory.

    32.6 Prepare the pause base image

    It helps start the Pod.

    kubelet does the dirty work:
    it takes the scheduler's decision - this node should bring the Pod up -
    and drives the Docker engine to pull the containers up.
    Bringing a Pod up depends on a base image that kubelet controls: it starts before our business container and sets up the UTS, NET, and IPC namespaces for it.
    So the Pod's IP is already allocated before the business container is even running.
    The base image we need is pause, and it is tiny.
    Run on jdss7-200:
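
    The pull step itself isn't shown in these notes; a plausible command (image name assumed) is:

    docker pull kubernetes/pause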

    docker images | grep pause
    Tag the pulled pause image as harbor.od.com/public/pause:latest:

    docker tag f9d5de079539 harbor.od.com/public/pause:latest
    

    Push the image:

    [root@jdss7-200 harbor]# docker push harbor.od.com/public/pause:latest
    The push refers to repository [harbor.od.com/public/pause]
    5f70bf18a086: Mounted from public/nginx 
    e16a89738269: Pushed 
    latest: digest: sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 size: 938
    [root@jdss7-200 harbor]#
    

    32.7 Write the kubelet startup script

    On jdss7-21:
    /opt/kubernetes/server/bin/kubelet.sh

    #!/bin/sh
    ./kubelet \
     --anonymous-auth=false \
     --cgroup-driver systemd \
     --cluster-dns 192.168.0.2 \
     --cluster-domain cluster.local \
     --runtime-cgroups=/systemd/system.slice \
     --kubelet-cgroups=/systemd/system.slice \
     --fail-swap-on="false" \
     --client-ca-file ./cert/ca.pem \
     --tls-cert-file ./cert/kubelet.pem \
     --tls-private-key-file ./cert/kubelet-key.pem \
     --hostname-override jdss7-21.host.com \
     --image-gc-high-threshold 20 \
     --image-gc-low-threshold 10 \
     --kubeconfig ./conf/kubelet.kubeconfig \
     --log-dir /data/logs/kubernetes/kube-kubelet \
     --pod-infra-container-image harbor.od.com/public/pause:latest \
     --root-dir /data/kubelet
    Notes on the flags:
    --anonymous-auth=false disallows anonymous access
    --cgroup-driver systemd keeps the cgroup driver consistent with Docker's
    --fail-swap-on="false" lets kubelet run with swap on (by default it refuses)
    

    mkdir -p /data/kubelet /data/logs/kubernetes/kube-kubelet

    Create the supervisor ini for process management:
    /etc/supervisord.d/kubelet.ini

    [program:kube-kubelet-7-21]
    command=/opt/kubernetes/server/bin/kubelet.sh 	;
    numprocs=1						;
    directory=/opt/kubernetes/server/bin			;
    autostart=true						;
    autorestart=true					;
    startsecs=30						;
    startretries=3						;
    exitcodes=0,2						;
    stopsignal=QUIT						;
    stopwaitsecs=10						;
    user=root						;
    redirect_stderr=true					;
    stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log ;
    stdout_logfile_maxbytes=64MB				;
    stdout_logfile_backups=4				;	
    stdout_capture_maxbytes=1MB				;
    stdout_events_enabled=false				;
    

    32.8 Check whether the node was added to the cluster

    Its ROLES column is empty.
    Add the roles:
    kubectl label node jdss7-21.host.com node-role.kubernetes.io/master=
    (master node)
    kubectl label node jdss7-21.host.com node-role.kubernetes.io/node=
    (compute node)
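
    View the result with standard kubectl:

    kubectl get nodes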

    33. kube-proxy

    Purpose: connects the pod network to the cluster network

    33.1 Sign the certificate

    Create the CSR file for signing.
    On jdss7-200:
    /opt/certs/kube-proxy-csr.json

    {
        "CN":"system:kube-proxy",
        "key":{
            "algo":"rsa",
            "size":2048
        },
        "names":[
            {
                "C":"CN",
                "ST":"beijing",
                "L":"beijing",
                "O":"od",
                "OU":"ops"
            }
        ]
    }
    Note: the CN here is meaningful - it is the name of a role inside k8s.
    system:node and system:kube-proxy are both default k8s roles;
    the kube-proxy user thereby gets the kube-proxy role by default.
    

    Sign the certificate:

    [root@jdss7-200 certs]#  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json | cfssl-json -bare kube-proxy-client
    2021/12/10 15:20:22 [INFO] generate received request
    2021/12/10 15:20:22 [INFO] received CSR
    2021/12/10 15:20:22 [INFO] generating key: rsa-2048
    2021/12/10 15:20:22 [INFO] encoded CSR
    2021/12/10 15:20:22 [INFO] signed certificate with serial number 324628620566520368261789249091584468600592978834
    2021/12/10 15:20:22 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    

    Distribute the certificates.
    On jdss7-21, in
    /opt/kubernetes/server/bin/cert/:

    [root@jdss7-21 cert]# scp jdss7-200:/opt/certs/kube-proxy-client.pem .
    root@jdss7-200's password: 
    kube-proxy-client.pem                                                                                             100% 1375     1.2MB/s   00:00    
    [root@jdss7-21 cert]# scp jdss7-200:/opt/certs/kube-proxy-client-key.pem .
    root@jdss7-200's password: 
    kube-proxy-client-key.pem
    

    Same again on jdss7-22.

    33.2 Create the config file

    • set-cluster
    kubectl config set-cluster myk8s \
     --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
     --embed-certs=true \
     --server=https://10.4.7.10:7443 \
     --kubeconfig=kube-proxy.kubeconfig
    Cluster "myk8s" set.
    • set-credentials
    kubectl config set-credentials kube-proxy \
     --client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
     --client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
     --embed-certs=true \
     --kubeconfig=kube-proxy.kubeconfig
    User "kube-proxy" set.
    
    • set-context
    kubectl config set-context myk8s-context \
    >  --cluster=myk8s \
    >  --user=kube-proxy \
    >  --kubeconfig=kube-proxy.kubeconfig
    Context "myk8s-context" created.
    
    • use-context
     kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
    Switched to context "myk8s-context"
    

    This produces the config file kube-proxy.kubeconfig.
    On jdss7-22, just copy the file generated on jdss7-21 into the same directory.

    33.3 Load the ipvs modules

    kube-proxy has 3 traffic-scheduling modes:
    1. user-space - abandoned; the kernel-space/user-space round trips cost too much
    2. iptables
    3. ipvs

    Enable the kernel's ipvs modules.
    Do this on both jdss7-21 and jdss7-22:
    /root/ipvs.sh

    #!/bin/bash
    ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
    for i in $(ls $ipvs_mods_dir | grep -o "^[^.]*")
    do
    	/sbin/modinfo -F filename $i &>/dev/null
    	if [[ $? -eq 0 ]];then
    		/sbin/modprobe $i
    	fi
    done
    

    After running it, the modules show up as loaded.

    A few notable schedulers:
    ip_vs_wrr: weighted round robin
    ip_vs_wlc: weighted least connections

    The most used are weighted least connections, least connections, round robin, and weighted round robin.
    Recommended here: the dynamic algorithm ip_vs_nq ("never queue").

    33.4 Create the startup script

    /opt/kubernetes/server/bin/kube-proxy.sh

    #!/bin/sh
    ./kube-proxy \
     --cluster-cidr 172.7.0.0/16 \
     --hostname-override jdss7-21.host.com \
     --proxy-mode=ipvs \
     --ipvs-scheduler=nq \
     --kubeconfig ./conf/kube-proxy.kubeconfig
    

    Create the log directory:
    mkdir -p /data/logs/kubernetes/kube-proxy

    33.5 Add it to supervisor

    /etc/supervisord.d/kube-proxy.ini

    [program:kube-proxy-7-21]
    command=/opt/kubernetes/server/bin/kube-proxy.sh 	;
    numprocs=1						;
    directory=/opt/kubernetes/server/bin			;
    autostart=true						;
    autorestart=true					;
    startsecs=30						;
    startretries=3						;
    exitcodes=0,2						;
    stopsignal=QUIT						;
    stopwaitsecs=10						;
    user=root						;
    redirect_stderr=true					;
    stdout_logfile=/data/logs/kubernetes/kube-proxy/kube-proxy.stdout.log ;
    stdout_logfile_maxbytes=64MB				;
    stdout_logfile_backups=4				;	
    stdout_capture_maxbytes=1MB				;
    stdout_events_enabled=false				;
    

    kube-proxy embeds LVS to do the port forwarding, maintaining the relationships among the three networks (pod network, cluster network, node network).
    yum install ipvsadm -y
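
    To inspect the virtual-server rules kube-proxy builds (standard ipvsadm usage):

    ipvsadm -Ln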

    34. Verifying the cluster

    On any compute node, create a resource manifest.
    We pick jdss7-21:
    /root/nginx-ds.yaml

    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: nginx-ds
    spec:
      template:
        metadata:
          labels:
            app: nginx-ds
        spec:
          containers:
          - name: my-nginx
            image: harbor.od.com/public/nginx:v1.7.9
            ports:
            - containerPort: 80
    
    

    Run kubectl create -f /root/nginx-ds.yaml
    daemonset.extensions/nginx-ds created
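
    To see where the pods landed (standard kubectl usage):

    kubectl get pods -o wide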

    Docker containers on different hosts can't yet talk to each other; the pod at 172.7.22.2 was scheduled onto the jdss7-22 machine.

    35. Resources this course depends on

    • We'll build a full k8s ecosystem and deliver a dubbo (Java) microservice on it, implementing step by step:
      • continuous integration
      • a configuration center
      • a monitoring system
      • log collection and analysis
      • an automated ops platform (ultimately an open-source PaaS on K8S)
    • Hardware needed for the course:
      2c/2g/5g * 3 + 4c/8g/50g * 2
      Keep the environment (IP plan and deployed services) identical to the course's.
    • Ways to get the resources:
      add RAM to your laptop
      build a home server/workstation if you can
      rent Alibaba Cloud hosts

    36. kubectl, CoreDNS, Dashboard
