• K8s Series: Installing the KubeSphere Platform on a K8s Cluster


    I. Installing KubeSphere on Kubernetes

    Installation steps

    • For the experiment, provision three pay-as-you-go machines: 4-core/8 GB (master), 8-core/16 GB (node1), and 8-core/16 GB (node2), all running CentOS 7.9 (check the version with cat /etc/redhat-release). In the security group, open the k8s NodePort range 30000-32767.

    • Install Docker

    • Install Kubernetes

    • Install the KubeSphere prerequisites

    • Install KubeSphere

    1. Install Docker

    You can copy all of the commands below and run them as-is, or paste them into a new shell script and execute that.

    sudo yum remove docker*
    sudo yum install -y yum-utils
    
    # Configure the Docker yum repo
    sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    
    
    # Install a specific version
    sudo yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
    
    # Start Docker now and enable it at boot
    systemctl enable docker --now
    
    # Configure a registry mirror for Docker. Note that daemon.json must be
    # strict JSON and cannot contain comments; swap in your own mirror
    # address below if you have one.
    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker
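
One pitfall here: daemon.json must be strict JSON, which forbids `#` comments inside the file. Before restarting Docker you can validate the file; a minimal sketch, run against a temporary copy so it works anywhere (python3 is assumed to be available):

```shell
# Sanity-check a daemon.json before restarting Docker: it must be strict
# JSON (no comments). This sketch validates a sample config in a temp file.
tmpjson=$(mktemp)
cat > "$tmpjson" <<'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
if python3 -m json.tool "$tmpjson" > /dev/null 2>&1; then
  daemon_json_ok=1
  echo "daemon.json is valid JSON"
else
  daemon_json_ok=0
  echo "daemon.json is NOT valid JSON" >&2
fi
rm -f "$tmpjson"
```

On a real machine you would point the check at /etc/docker/daemon.json instead of a temp file.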
    

    2. Install Kubernetes

    2.1 Base environment

    • All machines can reach each other over their private (internal) IPs

    • Each machine has its own hostname; localhost is not allowed

    Set the hostname on each of the three nodes (one command per node), then reconnect with your shell tool:

    hostnamectl set-hostname k8s-master
    hostnamectl set-hostname k8s-node1
    hostnamectl set-hostname k8s-node2
    

    Paste the following onto all three nodes and run it directly, or execute it as a shell script:

    # Set SELinux to permissive mode (effectively disabling it)
    sudo setenforce 0
    sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    
    # Disable swap
    swapoff -a  
    sed -ri 's/.*swap.*/#&/' /etc/fstab
    
    # Allow iptables to see bridged traffic
    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF
    
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sudo sysctl --system
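
The swap-related sed above comments out every fstab line mentioning "swap". To preview what it would change without touching the real /etc/fstab, you can run the same substitution against a copy; a sketch with made-up sample entries:

```shell
# Preview the fstab swap edit on a sample copy instead of the real file.
tmpfstab=$(mktemp)
cat > "$tmpfstab" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
sed -ri 's/.*swap.*/#&/' "$tmpfstab"   # same substitution as above
commented=$(grep -c '^#' "$tmpfstab")  # how many lines got commented out
echo "commented lines: $commented"
rm -f "$tmpfstab"
```

Only the swap line is commented out; the root filesystem entry is left untouched.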
    

    2.2 Install kubelet, kubeadm, and kubectl

    Run ip a on the master node to find its private IPv4 address, replace 172.31.0.4 in the last line below with that address, and then run the commands (directly, or as a shell script) on all three nodes.

    # Configure the Kubernetes yum repo
    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
    
    # Install kubelet, kubeadm, and kubectl
    sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9
    
    # Start kubelet and enable it at boot
    sudo systemctl enable --now kubelet
    
    # Map the master hostname on every machine
    echo "172.31.0.4  k8s-master" >> /etc/hosts
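
The echo >> /etc/hosts line above appends blindly, so re-running the setup script adds duplicate entries. A sketch of an idempotent variant, demonstrated on a temp file so it can run anywhere:

```shell
# Append a hosts entry only if it is not already present (idempotent).
hosts_file=$(mktemp)            # stand-in for /etc/hosts in this sketch
add_host() {
  grep -qF "$1" "$2" || echo "$1" >> "$2"
}
add_host "172.31.0.4  k8s-master" "$hosts_file"
add_host "172.31.0.4  k8s-master" "$hosts_file"   # second call is a no-op
entry_count=$(grep -c "k8s-master" "$hosts_file")
echo "entries: $entry_count"
rm -f "$hosts_file"
```

On the real machines you would pass /etc/hosts (with sudo) instead of the temp file.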
    

    After the commands above have finished on all three machines, verify the setup: run ping k8s-master on each machine. If all three can reach the master, the hostname mapping works.

    3. Initialize the master node

    • 3.1 Initialize

      Replace 172.31.0.4 in the second line below with your own master node's IP. This step pulls several images and can take a while, so be patient.

    kubeadm init \
    --apiserver-advertise-address=172.31.0.4 \
    --control-plane-endpoint=k8s-master \
    --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
    --kubernetes-version v1.20.9 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=192.168.0.0/16
    
    • 3.2 Record the key output

      Save the log that the previous step prints on the master once it finishes:

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:
    # To add more master (control-plane) nodes, run the command below
      kubeadm join k8s-master:6443 --token 3vckmv.lvrl05xpyftbs177 \
        --discovery-token-ca-cert-hash sha256:1dc274fed24778f5c284229d9fcba44a5df11efba018f9664cf5e8ff77907240 \
        --control-plane 
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join k8s-master:6443 --token 3vckmv.lvrl05xpyftbs177 \
        --discovery-token-ca-cert-hash sha256:1dc274fed24778f5c284229d9fcba44a5df11efba018f9664cf5e8ff77907240
    
    • 3.3 On the master node, run the mkdir/cp/chown commands from the log above (copy them from your own 3.2 output, not from the block below).

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
      

      After running the commands above, verify with kubectl get nodes; if node information is printed, the step succeeded.

    • 3.4 Install the Calico network plugin

      curl https://docs.projectcalico.org/manifests/calico.yaml -O
      
      kubectl apply -f calico.yaml
      
    • 3.5 Join the worker nodes

      On k8s-node1 and k8s-node2, run the last two lines from your own 3.2 log (copy them from that log, not from the block below). Note that the join token expires after 24 hours by default; if it has expired, generate a fresh join command on the master with kubeadm token create --print-join-command.

    kubeadm join k8s-master:6443 --token 3vckmv.lvrl05xpyftbs177 \
        --discovery-token-ca-cert-hash sha256:1dc274fed24778f5c284229d9fcba44a5df11efba018f9664cf5e8ff77907240
    

    Verify: on the master node, run kubectl get nodes and check that node1 and node2 have joined. If their status is NotReady, run kubectl get pod -A and check whether every pod is READY; this usually resolves itself within about 5 minutes.
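
Rather than re-running the checks by hand, you can wrap them in a small polling helper. A generic sketch; it is demonstrated here with a trivially succeeding command instead of kubectl, since the helper itself is what is being shown:

```shell
# Poll a command until it succeeds or a timeout (in seconds) elapses.
wait_for() {
  local timeout=$1 interval=$2
  shift 2
  local elapsed=0
  until "$@"; do
    sleep "$interval"
    elapsed=$((elapsed + interval))
    [ "$elapsed" -ge "$timeout" ] && return 1
  done
  return 0
}

# Against a real cluster you might use something like:
#   wait_for 300 5 sh -c '! kubectl get nodes --no-headers | grep -q NotReady'
# Demonstrated here with a command that succeeds immediately:
wait_for 10 1 true && result=ok || result=timeout
echo "poll result: $result"
```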

    4. Install the KubeSphere prerequisites

    • 4.1 Reference: NFS installation tutorial

    • 4.2 Install metrics-server (the cluster metrics monitoring component)

    KubeSphere would normally install this automatically, but the download often fails, so install it manually in advance. Save the manifest below to a file (for example metrics-server.yaml) and apply it with kubectl apply -f metrics-server.yaml.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        k8s-app: metrics-server
        rbac.authorization.k8s.io/aggregate-to-admin: "true"
        rbac.authorization.k8s.io/aggregate-to-edit: "true"
        rbac.authorization.k8s.io/aggregate-to-view: "true"
      name: system:aggregated-metrics-reader
    rules:
      - apiGroups:
          - metrics.k8s.io
        resources:
          - pods
          - nodes
        verbs:
          - get
          - list
          - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        k8s-app: metrics-server
      name: system:metrics-server
    rules:
      - apiGroups:
          - ""
        resources:
          - pods
          - nodes
          - nodes/stats
          - namespaces
          - configmaps
        verbs:
          - get
          - list
          - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server-auth-reader
      namespace: kube-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: extension-apiserver-authentication-reader
    subjects:
      - kind: ServiceAccount
        name: metrics-server
        namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server:system:auth-delegator
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:auth-delegator
    subjects:
      - kind: ServiceAccount
        name: metrics-server
        namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: system:metrics-server
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:metrics-server
    subjects:
      - kind: ServiceAccount
        name: metrics-server
        namespace: kube-system
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    spec:
      ports:
        - name: https
          port: 443
          protocol: TCP
          targetPort: https
      selector:
        k8s-app: metrics-server
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          k8s-app: metrics-server
      strategy:
        rollingUpdate:
          maxUnavailable: 0
      template:
        metadata:
          labels:
            k8s-app: metrics-server
        spec:
          containers:
            - args:
                - --cert-dir=/tmp
                - --kubelet-insecure-tls
                - --secure-port=4443
                - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
                - --kubelet-use-node-status-port
              # Optionally mirror metrics-server into your own registry and point the image below at it
              image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
              imagePullPolicy: IfNotPresent
              livenessProbe:
                failureThreshold: 3
                httpGet:
                  path: /livez
                  port: https
                  scheme: HTTPS
                periodSeconds: 10
              name: metrics-server
              ports:
                - containerPort: 4443
                  name: https
                  protocol: TCP
              readinessProbe:
                failureThreshold: 3
                httpGet:
                  path: /readyz
                  port: https
                  scheme: HTTPS
                periodSeconds: 10
              securityContext:
                readOnlyRootFilesystem: true
                runAsNonRoot: true
                runAsUser: 1000
              volumeMounts:
                - mountPath: /tmp
                  name: tmp-dir
          nodeSelector:
            kubernetes.io/os: linux
          priorityClassName: system-cluster-critical
          serviceAccountName: metrics-server
          volumes:
            - emptyDir: {}
              name: tmp-dir
    ---
    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      labels:
        k8s-app: metrics-server
      name: v1beta1.metrics.k8s.io
    spec:
      group: metrics.k8s.io
      groupPriorityMinimum: 100
      insecureSkipTLSVerify: true
      service:
        name: metrics-server
        namespace: kube-system
      version: v1beta1
      versionPriority: 100 
    

    Verify that metrics-server installed successfully:

    • Run kubectl get pod -A and check that metrics-server is Running.
    • Run kubectl top nodes to see CPU and memory usage for every node in the cluster.
    • Run kubectl top pods -A to see resource usage for pods in all namespaces.
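
The kubectl top output can also be filtered in scripts. A sketch that flags nodes above a memory threshold, run here against captured sample output (the node names and numbers are made up for illustration; in a real cluster you would pipe kubectl top nodes into the awk filter):

```shell
# Parse `kubectl top nodes`-style output and flag nodes over 80% memory.
sample='NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master   250m         6%     2600Mi          35%
k8s-node1    180m         2%     13900Mi         88%
k8s-node2    120m         1%     4100Mi          26%'
hot_nodes=$(echo "$sample" | awk 'NR > 1 { gsub(/%/, "", $5); if ($5 + 0 > 80) print $1 }')
echo "nodes over 80% memory: $hot_nodes"
```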

    At this point, the KubeSphere prerequisites are complete!

    5. Install KubeSphere

    https://kubesphere.io/zh/docs/v3.3/installing-on-kubernetes/introduction/overview/

    5.1 Download the core files; if the downloads fail, copy them from the link below

    https://www.yuque.com/leifengyang/oncloud/gz1sls#EbmTY

    yum install -y  wget
    
    wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
    
    wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml
    
    yum install -y vim 
    
    vim cluster-configuration.yaml
     # Set spec.etcd.monitoring to true
     # Set spec.etcd.endpointIps to the master node's private IP
     # Set spec.common.redis.enabled to true
     # Set spec.common.openldap.enabled to true (enables the lightweight directory protocol, LDAP)
     # Set spec.alerting.enabled to true
     # Set spec.auditing.enabled to true
     # Set spec.devops.enabled to true
     # Set spec.events.enabled to true
     # Set spec.logging.enabled to true
     # Set spec.network.networkpolicy.enabled to true
     # Set spec.network.ippool.type to calico
     # Set spec.openpitrix.store.enabled to true (enables the app store)
     # Set spec.servicemesh.enabled to true (enables microservice governance)
     # Set spec.kubeedge.enabled to true (enables edge services; optional, and hard to try out without edge nodes)
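
Flipping all of these toggles by hand in vim is error-prone; the same edits can be scripted with sed. A sketch demonstrated on a minimal stand-in for cluster-configuration.yaml (the real file uses the same `enabled: false` pattern under each component, but the key layout here is an assumption; always diff the result before applying):

```shell
# Flip `enabled: false` -> `enabled: true` inside a named component block.
# Demonstrated on a tiny stand-in for cluster-configuration.yaml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
spec:
  devops:
    enabled: false
  alerting:
    enabled: false
EOF
# GNU sed: within the range starting at the component name and ending at
# the next `enabled:` line, rewrite that line to true.
enable_component() {
  sed -i "/^  $1:/,/enabled:/ s/enabled: false/enabled: true/" "$cfg"
}
enable_component devops
devops_state=$(sed -n '/devops/,/enabled/ s/.*enabled: //p' "$cfg")
echo "devops enabled: $devops_state"
rm -f "$cfg"
```

Note that only the devops block is changed; alerting stays false until you enable it the same way.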
    

    5.2 Run the installation commands

    kubectl apply -f kubesphere-installer.yaml
    
    kubectl apply -f cluster-configuration.yaml
    

    Verify: run kubectl get pod -A and check that ks-installer is Running; it usually reaches Running in about 3 minutes.

    5.3 Watch the installation progress (about 15 minutes)

    kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
    
    
    • On the master node, run kubectl get pod -A and wait until every pod is running normally before accessing the console.

    • Going further:

      How can you tell whether a pod is stuck on a slow image pull, rather than failing to start for some other reason?

    # kubectl describe pod -n <namespace> <pod-name>, for example:
    kubectl describe pod -n kubesphere-monitoring-system alertmanager-main-1
    

    Check the events at the end of the output. If the last line is stuck at Pulling image "prom/alertmanager:v0.21.0", just keep waiting. If not, find the specific reason, analyze it, and then deal with it.

    Note: prometheus-k8s-0 may fail when binding to etcd. Inspect it with:

    kubectl describe pod -n kubesphere-monitoring-system prometheus-k8s-0
    

    To fix the missing etcd monitoring certificate, create the secret below, then wait another 5 minutes or so:

    kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
    
    • On the master node, run kubectl get pod -A and wait until every pod is running normally before accessing the console.

    • Access port 30880 on any of the machines. Port 30880 must be open in every machine's security group; since ports 30000-32767 were opened earlier, you can access it directly.

    Account:  admin

    Password: P@88w0rd

    Change the password on first login!

  • Original post: https://www.cnblogs.com/hujunwei/p/16873727.html