• Installing a Kubernetes (k8s) cluster with kubeadm


    System configuration

    Before installing, complete the following preparation. The two CentOS 7.4 hosts are:

    cat /etc/hosts
    192.168.61.11 node1
    192.168.61.12 node2

    If a firewall is enabled on the hosts, you need to open the ports required by each Kubernetes component; see the "Check required ports" section of Installing kubeadm. For simplicity, disable the firewall on every node here:

    systemctl stop firewalld
    systemctl disable firewalld

    Disable SELinux:

    setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

    Disable the swap partition:

    swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
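
    A quick optional check (standard CentOS 7 tools) that swap is really off:

    free -m       # the Swap line should show 0 total
    swapon -s     # prints nothing when no swap is active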

    Load the kernel modules required by IPVS:

    cat > /etc/sysconfig/modules/ipvs.modules <<EOF
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    EOF
    chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

    Create the file /etc/sysctl.d/k8s.conf with the following content:

    cat > /etc/sysctl.d/k8s.conf <<EOF
    net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-ip6tables=1
    net.ipv4.ip_forward=1
    net.ipv4.tcp_tw_recycle=0

    # do not use swap; allow it only when the system is close to OOM
    vm.swappiness=0
    # do not check whether enough physical memory is available
    vm.overcommit_memory=1

    # do not panic on OOM; let the OOM killer handle it
    vm.panic_on_oom=0
    fs.inotify.max_user_instances=8192
    fs.inotify.max_user_watches=1048576
    fs.file-max=52706963
    fs.nr_open=52706963
    net.ipv6.conf.all.disable_ipv6=1
    net.netfilter.nf_conntrack_max=2310720
    EOF
    # apply the settings
    sysctl -p /etc/sysctl.d/k8s.conf
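
    If sysctl reports that net.bridge.bridge-nf-call-iptables is an unknown key, the br_netfilter module is likely not loaded yet; loading it first and re-applying usually fixes this (a common fix, not part of the original steps):

    modprobe br_netfilter
    sysctl -p /etc/sysctl.d/k8s.conf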

    Install the required tools (ipset, ipvsadm, conntrack)

    yum install ipset ipvsadm conntrack-tools.x86_64 -y

    Install Docker

    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager \
        --add-repo \
        https://download.docker.com/linux/centos/docker-ce.repo
    yum install -y --setopt=obsoletes=0 \
      docker-ce
    
    systemctl start docker
    systemctl enable docker
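
    Optionally, check which cgroup driver Docker is using; the kubelet's cgroup driver has to match it. The daemon.json below switches Docker to the systemd driver, a common recommendation rather than part of the original article:

    docker info | grep -i cgroup
    # optional: use the systemd cgroup driver (the kubelet must then be configured to match)
    cat > /etc/docker/daemon.json <<EOF
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    systemctl restart docker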

    Deploying Kubernetes with kubeadm

    Next, install kubeadm, kubelet, and kubectl on each node:

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
            https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOF
    yum -y install kubelet kubeadm kubectl
    systemctl enable kubelet.service
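
    The kubelet will keep restarting until kubeadm init or kubeadm join runs; that is expected. As a sanity check (not in the original), confirm the installed version matches the --kubernetes-version flag used below:

    kubeadm version -o short
    kubelet --version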

    Pull the required images

    Next, pull the required images on each node. Since k8s.gcr.io may be unreachable, the images are pulled from the Aliyun mirror and retagged:

    for i in `kubeadm config images list`; do   
        imageName=${i#k8s.gcr.io/}  
        docker pull registry.aliyuncs.com/google_containers/$imageName      
        docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName  
        docker rmi registry.aliyuncs.com/google_containers/$imageName
    done
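
    To confirm that every image was pulled and retagged correctly:

    docker images | grep k8s.gcr.io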

    Initialize the cluster with kubeadm init

    On the master node, run:

    kubeadm init \
      --kubernetes-version=v1.12.0 \
      --pod-network-cidr=10.244.0.0/16 \
      --apiserver-advertise-address=192.168.61.11 \
      --ignore-preflight-errors=Swap

    [init] using Kubernetes version: v1.12.0
    [preflight] running pre-flight checks
            [WARNING Swap]: running with swap on is not supported. Please disable swap
    [preflight/images] Pulling images required for setting up a Kubernetes cluster
    [preflight/images] This might take a minute or two, depending on the speed of your internet connection
    [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [preflight] Activating the kubelet service
    [certificates] Generated etcd/ca certificate and key.
    [certificates] Generated etcd/peer certificate and key.
    [certificates] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [192.168.61.11 127.0.0.1 ::1]
    [certificates] Generated apiserver-etcd-client certificate and key.
    [certificates] Generated etcd/server certificate and key.
    [certificates] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [127.0.0.1 ::1]
    [certificates] Generated etcd/healthcheck-client certificate and key.
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.61.11]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
    [certificates] Generated sa key and public key.
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
    [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
    [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
    [init] this might take a minute or longer if the control plane images have to be pulled
    [apiclient] All control plane components are healthy after 26.503672 seconds
    [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
    [markmaster] Marking the node node1 as master by adding the label "node-role.kubernetes.io/master=''"
    [markmaster] Marking the node node1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation
    [bootstraptoken] using token: zalj3i.q831ehufqb98d1ic
    [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes master has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join 192.168.61.11:6443 --token zalj3i.q831ehufqb98d1ic --discovery-token-ca-cert-hash sha256:6ee48b19ba61a2dda77f6b60687c5fd11072ab898cfdfef32a68821d1dbe8efa
    • [kubelet] writes the kubelet configuration file "/var/lib/kubelet/config.yaml"
    • [certificates] generates the various certificates
    • [kubeconfig] generates the related kubeconfig files
    • [bootstraptoken] generates the bootstrap token; record it, since it is needed later when adding nodes to the cluster with kubeadm join
    • The following commands let a regular user access the cluster with kubectl (a quick verification follows them):
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
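
    kubectl should now be able to reach the cluster; the master stays NotReady until a pod network is installed:

    kubectl get nodes
    kubectl get pods -n kube-system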

    On the worker node, run the join command printed by kubeadm init:

     kubeadm join 192.168.61.11:6443 --token zalj3i.q831ehufqb98d1ic --discovery-token-ca-cert-hash sha256:6ee48b19ba61a2dda77f6b60687c5fd11072ab898cfdfef32a68821d1dbe8efa

    Install a pod network (flannel)

    mkdir -p ~/k8s/
    cd ~/k8s
    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    kubectl apply -f  kube-flannel.yml
    
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.extensions/kube-flannel-ds-amd64 created
    daemonset.extensions/kube-flannel-ds-arm64 created
    daemonset.extensions/kube-flannel-ds-arm created
    daemonset.extensions/kube-flannel-ds-ppc64le created
    daemonset.extensions/kube-flannel-ds-s390x created
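
    Once the flannel pods are running, the nodes should turn Ready; progress can be watched with:

    kubectl get pods -n kube-system -o wide
    kubectl get nodes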

    Check the cluster status and confirm that all components are healthy:

    kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    controller-manager   Healthy   ok
    scheduler            Healthy   ok
    etcd-0               Healthy   {"health": "true"}

    On the slave nodes (add --ignore-preflight-errors=Swap if swap is still enabled):

    kubeadm join 192.168.61.11:6443 \
      --token zalj3i.q831ehufqb98d1ic \
      --discovery-token-ca-cert-hash sha256:6ee48b19ba61a2dda77f6b60687c5fd11072ab898cfdfef32a68821d1dbe8efa \
      --ignore-preflight-errors=Swap
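
    Back on the master, confirm that the new node has joined and eventually becomes Ready:

    kubectl get nodes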

    If cluster initialization runs into problems, the following commands can be used to clean up before retrying:

    kubeadm reset
    ifconfig cni0 down
    ip link delete cni0
    ifconfig flannel.1 down
    ip link delete flannel.1
    rm -rf /var/lib/cni/
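
    kubeadm reset does not flush iptables or IPVS rules; if they need to be cleared as well, the following extra commands (not part of the original article) can be used:

    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
    ipvsadm --clear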