• Offline Deployment of Kubernetes and Harbor on the arm64 Platform



    Preface

    With the rise of the domestic-technology (localization) wave, some enterprises and departments have begun adopting domestically produced servers and operating systems. This article walks through an offline deployment of Kubernetes on a domestic server (Great Wall) running a domestic operating system (Kylin). The offline packages, configuration files, and installation notes involved are provided throughout the article.


    The components to be installed are as follows:

    Package                     Version
    docker-ce                   19.03.14
    kubeadm, kubelet, kubectl   v1.19.7
    flannel                     v0.14.0
    traefik                     2.1.2
    kube-dashboard              v2.0.5
    harbor                      v1.9.1

    Environment


    Server Model


    [Figure: server model information]


    Operating System Information


    [Figure: operating system information]


    Deployment Plan


    Hostname     IP               Purpose
    k8s-master   192.168.1.161    Kubernetes master node
    k8s-node1    192.168.1.162    Kubernetes worker node 1
    k8s-node2    192.168.1.163    Kubernetes worker node 2
    harbor       192.168.1.164    Private image registry for the cluster

    Deployment Workflow


    Kubernetes cluster deployment steps


    • Install docker-ce
    • Install kubeadm, kubectl, and kubelet
    • Initialize the Kubernetes cluster
    • Join the worker nodes to the Kubernetes cluster
    • Install and deploy Flannel
    • Install and deploy Traefik
    • Install and deploy kube-dashboard

    Harbor registry deployment steps


    • Install docker-ce
    • Install docker-compose
    • Edit the harbor.yml configuration file and initialize the Harbor registry


    Package Versions


    Package                     Version
    docker-ce                   19.03.14
    kubeadm, kubelet, kubectl   v1.19.7
    flannel                     v0.14.0
    traefik                     2.1.2
    kube-dashboard              v2.0.5


    Offline Installation Package


    The offline installation package has been verified in both test and production environments; nothing is missing.


    The offline package includes the following:

    1. The installer packages for each component
    2. The image files required by each component
    3. Detailed installation notes for each component

    Creating, testing, and collecting all of this was not easy. If you need the package, please help cover one month of Baidu Netdisk membership at the end of the article, and I will share it with you by private message.



    Kubernetes Installation and Deployment


    System Initialization


    1. Disable SELinux and the firewall
    [root@k8s-master(192.168.1.161) ~]#systemctl disable firewalld
    [root@k8s-master(192.168.1.161) ~]#sed -i 's@SELINUX=enforcing@SELINUX=disabled@g' /etc/selinux/config
    [root@k8s-master(192.168.1.161) ~]#reboot
    

    2. Synchronize the time via NTP
    [root@k8s-master(192.168.1.161) ~]#ntpdate tiger.sina.com.cn
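
    A one-shot ntpdate only corrects the clock once. On an isolated network you would normally point it at an internal NTP server and repeat it periodically; a minimal sketch (the cron entry and the server address are assumptions, not part of the offline package):

    ## add with crontab -e: sync every 30 minutes against an internal NTP server (192.168.1.1 is only an example)
    */30 * * * * /usr/sbin/ntpdate 192.168.1.1 > /dev/null 2>&1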
    


    3. Configure host name resolution (/etc/hosts)
    [root@k8s-master(192.168.1.161) ~]#cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.1.161	k8s-master
    192.168.1.162	k8s-node1
    192.168.1.163	k8s-node2
    192.168.1.164	harbor
    
    [root@k8s-master(192.168.1.161) ~]#for i in {2..4}; do scp /etc/hosts 192.168.1.16${i}:/etc/;done
    


    4. Raise the open-file limit and enable IP forwarding
    [root@k8s-master(192.168.1.161) ~]#ulimit -SHn 655350
    [root@k8s-master(192.168.1.161) ~]#cat << 'EOF' >> /etc/rc.local
    > ulimit -SHn 655350
    > EOF
    [root@k8s-master(192.168.1.161) ~]#chmod +x /etc/rc.local
    [root@k8s-master(192.168.1.161) ~]#vim /etc/sysctl.d/99-sysctl.conf
    ...
    net.ipv4.ip_forward=1
    ...
    [root@k8s-master(192.168.1.161) ~]#sysctl --system
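
    Besides net.ipv4.ip_forward, kubeadm's preflight checks normally also expect the bridge netfilter settings to be enabled. The offline package's notes do not mention them, so treat the following as a suggested addition rather than part of the package:

    ## load the br_netfilter module and let bridged traffic pass through iptables
    modprobe br_netfilter
    cat << EOF >> /etc/sysctl.d/99-sysctl.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    EOF
    sysctl --system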
    
    


    5. If the host has Internet access, simply use the default yum repositories. Otherwise, download the packages on a machine with Internet access and build a local repo yourself (a rough sketch follows the repo listing below); you can also message me privately to get one.
    [root@k8s-master(192.168.1.161) ~]#yum repolist
    repo id                                                              repo name
    docker-ce                                                            docker-ce-stable
    ks10-adv-os                                                          ks10-adv-os
    kubernetes                                                           kubernetes
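
    If you have to build the local repo yourself, the general approach is to download the rpm packages on a connected machine, index the directory with createrepo, copy it to the offline host, and point a .repo file at it. A rough sketch (directory and repo names are illustrative, not from the package):

    ## on the connected machine: collect the rpms under /opt/k8s-repo and build the repo metadata
    yum install -y createrepo
    createrepo /opt/k8s-repo

    ## on the offline host: define a local repo pointing at the copied directory
    cat << EOF > /etc/yum.repos.d/local-k8s.repo
    [local-k8s]
    name=local-k8s
    baseurl=file:///opt/k8s-repo
    enabled=1
    gpgcheck=0
    EOF
    yum clean all && yum repolist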
    

    6. Prepare the installation packages
    [root@k8s-master(192.168.1.161) ~]#ls
    anaconda-ks.cfg  initial-setup-ks.cfg  kubernetes-arm64.zip
    
    [root@k8s-master(192.168.1.161) ~]#unzip kubernetes-arm64.zip
    Archive:  kubernetes-arm64.zip
       creating: kubernetes-arm64/
      inflating: kubernetes-arm64/README.txt
     extracting: kubernetes-arm64/docker-ce-19.03-arm64.tar.gz
     extracting: kubernetes-arm64/docker-compose-arm64-1.22.tar.gz
     extracting: kubernetes-arm64/flannel-v0.14.0-arm64.tar.gz
     extracting: kubernetes-arm64/harbor-v1.9.1-arm64.tar.gz
     extracting: kubernetes-arm64/k8s-rpm-images-arm64.tar.gz
     extracting: kubernetes-arm64/kube-dashboard-v2.0.5-arm64.tar.gz
     extracting: kubernetes-arm64/traefik-2.1.2-arm64.tar.gz
     
    [root@k8s-master(192.168.1.161) ~]#cd kubernetes-arm64
    
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#ls
    docker-ce-19.03-arm64.tar.gz      flannel-v0.14.0-arm64.tar.gz  k8s-rpm-images-arm64.tar.gz         README.txt
    docker-compose-arm64-1.22.tar.gz  harbor-v1.9.1-arm64.tar.gz    kube-dashboard-v2.0.5-arm64.tar.gz  traefik-2.1.2-arm64.tar.gz
    
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#cat README.txt
    # Kubernetes installation order
    
    1. docker-ce-19.03-arm64.tar.gz
    2. k8s-rpm-images-arm64.tar.gz
    3. flannel-v0.14.0-arm64.tar.gz
    4. traefik-2.1.2-arm64.tar.gz
    5. kube-dashboard-v2.0.5-arm64.tar.gz
    
    
    # Harbor installation order
    
    1. docker-ce-19.03-arm64.tar.gz
    2. docker-compose-arm64-1.22.tar.gz
    3. harbor-v1.9.1-arm64.tar.gz
    
    
    Notes:
    Each tar.gz corresponds to one component; after extracting it you will find a README.md and can simply follow the steps in it.
    


    k8s-master Node Installation


    Install docker-ce

    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#tar xf docker-ce-19.03-arm64.tar.gz
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#cd docker-ce-19.03-arm64
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/docker-ce-19.03-arm64]#cat README.md
    ## How to install docker:
    yum localinstall *.rpm -y
    
    ## Copy the docker directory to /etc/
    cp -a docker /etc/
    
    ## Start docker
    systemctl enable docker ; systemctl start docker ; docker info
    
    ################ Follow the README above ################
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/docker-ce-19.03-arm64]#yum localinstall *.rpm -y
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/docker-ce-19.03-arm64]#cp -a docker /etc/
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/docker-ce-19.03-arm64]#systemctl enable docker ; systemctl start docker ; docker info
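
    The docker directory copied to /etc/ presumably ships its own daemon.json. If you ever need to write one by hand, the sketch below shows the kind of settings commonly used with kubeadm plus a plain-HTTP private registry; the actual contents of the packaged file may differ:

    ## example only -- restart docker afterwards with: systemctl restart docker
    cat << 'EOF' > /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "insecure-registries": ["192.168.1.164"]
    }
    EOF

    The insecure-registries entry is what later lets the cluster nodes pull images from the Harbor registry at 192.168.1.164 over plain HTTP.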
    

    Install the Kubernetes packages


    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#tar xf k8s-rpm-images-arm64.tar.gz
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#cd k8s-rpm-images/v1.19.7/
    ---------------README---------------
    ## Disable swap
    swapoff -a
    sed -i '/swap/ s$^\(.*\)$#\1$g' /etc/fstab
    
    
    
    ## Install the Kubernetes packages
    yum localinstall *.rpm -y
    kubeadm completion bash > /etc/bash_completion.d/kubeadm
    kubectl completion bash > /etc/bash_completion.d/kubectl
    
    Disconnect and reconnect the SSH session so that the shell completion takes effect.
    
    ## Generate and edit the configuration file
    kubeadm config print init-defaults > init-default.yaml
    
    ### The init-default.yaml configuration file looks like this:
    apiVersion: kubeadm.k8s.io/v1beta2
    bootstrapTokens:
    - groups:
      - system:bootstrappers:kubeadm:default-node-token
      token: abcdef.0123456789abcdef
      ttl: 2400h0m0s   ## the token TTL can be extended somewhat
      usages:
      - signing
      - authentication
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 192.168.1.166  ## change this to the IP of your own host
      bindPort: 6443
    nodeRegistration:
      criSocket: /var/run/dockershim.sock
      name: master
      taints:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
    ---
    apiServer:
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta2
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.aliyuncs.com/google_containers   ## changed to the Aliyun mirror registry
    kind: ClusterConfiguration
    kubernetesVersion: v1.19.0
    networking:
      dnsDomain: cluster.local
      serviceSubnet: 10.96.0.0/12
      podSubnet: 10.244.0.0/16  ## Pod network CIDR
    scheduler: {}
    
    ## Load the images
    Note: the images used by k8s v1.19.7 here are the v1.19.0 images
    docker load < k8s-v1.19.0-arm64.tar.gz
    
    
    ## Initialize the Kubernetes cluster
    kubeadm init --config ./init-default.yaml
    
    ...
    Your Kubernetes control-plane has initialized successfully!
    ...
    If the message above appears, the initialization succeeded.
    
    Then run:
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Record the following command; it will be run on the worker nodes:
    kubeadm join 192.168.1.166:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:cd20a4bce0edb28588ae12b8ed36057ed535bfc4fa38c5d24d16e9614fb8a6ab
    
    ## Check whether it succeeded
    kubectl get nodes
    NAME     STATUS     ROLES    AGE     VERSION
    master   NotReady   master   4m59s   v1.19.7
    
    ## Next step: install flannel
    See the README.md inside the flannel archive
    --------------------------------------
    

    Every component archive contains an installation guide (README.md); simply follow it.

    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#swapoff -a
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#sed -i '/swap/ s$^\(.*\)$#\1$g' /etc/fstab
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#exit
    // reconnect the session
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#kubeadm config print init-defaults > init-default.yaml
    
    ### Edit the init file carefully; pay close attention to the marked fields ###
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#vim init-default.yaml
    
    apiVersion: kubeadm.k8s.io/v1beta2
    bootstrapTokens:
    - groups:
      - system:bootstrappers:kubeadm:default-node-token
      token: abcdef.0123456789abcdef
      ttl: 240000h0m0s
      usages:
      - signing
      - authentication
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 192.168.1.161    ## IP of the k8s-master host
      bindPort: 6443
    nodeRegistration:
      criSocket: /var/run/dockershim.sock
      name: k8s-master
      taints:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
    ---
    apiServer:
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta2
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.aliyuncs.com/google_containers   ## image registry address
    kind: ClusterConfiguration
    kubernetesVersion: v1.19.0
    networking:
      dnsDomain: cluster.local
      serviceSubnet: 10.96.0.0/12
      podSubnet: 10.244.0.0/16        ## Pod network CIDR
    scheduler: {}
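
    Before running the init, it is worth confirming that the images kubeadm expects match what the offline tarball provides. kubeadm can print that list from the same config file; this check is optional and not mentioned in the package README:

    ## list the images kubeadm will try to use for this configuration
    kubeadm config images list --config init-default.yaml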
    

    Load the image files:

    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#docker load < k8s-v1.19.0-arm64.tar.gz
    

    Initialize the cluster:

    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#kubeadm init --config ./init-default.yaml
    ...
    Your Kubernetes control-plane has initialized successfully!
    ...
    
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#  mkdir -p $HOME/.kube
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#  sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    [root@k8s-master(192.168.1.161) ~]#kubectl get nodes
    NAME         STATUS     ROLES    AGE   VERSION
    k8s-master   NotReady   master   42s   v1.19.7
    


    Install the Flannel Network Plugin


    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#tar xf flannel-v0.14.0-arm64.tar.gz
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#cd flannel-v0.14.0-arm64/
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/flannel-v0.14.0-arm64]#cat README.md
    ## Load the flannel images
    Note: every node must load them as well
    docker load < flannel-v0.14.0-arm64-image.tar.gz
    
    ## Apply the yml manifest
    kubectl apply -f kube-flannel.yml
    podsecuritypolicy.policy/psp.flannel.unprivileged created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.apps/kube-flannel-ds created
    
    ## Check node status
    kubectl get nodes
    NAME     STATUS   ROLES    AGE   VERSION
    master   Ready    master   62m   v1.19.7
    
    If STATUS is Ready, flannel was installed successfully.
    

    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/flannel-v0.14.0-arm64]#docker load < flannel-v0.14.0-arm64-image.tar.gz
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/flannel-v0.14.0-arm64]#kubectl apply -f kube-flannel.yml
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/flannel-v0.14.0-arm64]#kubectl get pod -n kube-system
    NAME                                 READY   STATUS    RESTARTS   AGE
    coredns-6d56c8448f-bj55k             0/1     Pending   0          2m7s
    coredns-6d56c8448f-pqrs2             0/1     Pending   0          2m7s
    etcd-k8s-master                      1/1     Running   0          2m15s
    kube-apiserver-k8s-master            1/1     Running   0          2m15s
    kube-controller-manager-k8s-master   1/1     Running   0          2m15s
    kube-flannel-ds-9dsjt                1/1     Running   0          12s    ### the pod to watch
    kube-proxy-npf66                     1/1     Running   0          2m7s
    kube-scheduler-k8s-master            1/1     Running   0          2m15s
    
    [root@k8s-master(192.168.1.161) ~]#kubectl get nodes
    NAME         STATUS   ROLES    AGE     VERSION
    k8s-master   Ready    master   2m50s   v1.19.7
    
    ### The node status has gone from NotReady to Ready; the cluster is up.
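
    As an extra sanity check (not part of the package README), flannel in its default VXLAN mode creates a flannel.1 interface on each node once it is running, so the following should show an address from the 10.244.0.0/16 Pod network and one DaemonSet pod per node:

    ip -4 addr show flannel.1
    kubectl -n kube-system get ds kube-flannel-ds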
    

    Install the Traefik Gateway


    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#tar xf traefik-2.1.2-arm64.tar.gz
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#cd traefik/
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/traefik]#cat README.md
    ## Load the image
    Note: every node in the cluster must load it
    docker load < traefik-arm64-2.1.2.tar.gz
    
    
    ## Apply the yaml manifests in the following order:
    traefik-crd.yaml
    traefik-rbac.yaml
    traefik-configmap.yaml
    traefik-deployment.yaml
    traefik-dashboard-route.yaml
    
    
    ## Check traefik
    kubectl get pod -o wide
    NAME                                         READY   STATUS    RESTARTS   AGE   IP           NODE
    traefik-ingress-controller-649b45b97-5r7w7   1/1     Running   0          31s   10.244.0.7   master
    
    ## On Windows, edit the C:\Windows\System32\drivers\etc\hosts file and add the following entry:
    192.168.1.166		www.test.com
    
    Note: 192.168.1.166 is the IP address of the master node where the pod above is running
    
    ## Visit www.test.com in a browser
    If the traefik page appears, the configuration succeeded.
    

    Install according to the README.md:

    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/traefik]#docker load < traefik-arm64-2.1.2.tar.gz
    ##### Warnings printed during these steps can be ignored #####
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/traefik]#kubectl apply -f  traefik-crd.yaml
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/traefik]#kubectl apply -f  traefik-rbac.yaml
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/traefik]#kubectl apply -f  traefik-configmap.yaml
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/traefik]#kubectl apply -f  traefik-deployment.yaml
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/traefik]#kubectl apply -f  traefik-dashboard-route.yaml
    
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/traefik]#kubectl get pod
    NAME                                         READY   STATUS    RESTARTS   AGE
    traefik-ingress-controller-649b45b97-qzdr7   1/1     Running   0          18s
    
    # started successfully
    

    Add the hosts entry on Windows and check whether http://www.test.com/ opens the traefik console.

    [Figure: opening the Windows hosts file]


    Add the following entry:

    [Figure: hosts file entry mapping www.test.com to the master node IP]

    Save and exit, then visit http://www.test.com in a browser.

    [Figure: traefik dashboard]


    If the traefik dashboard opens, traefik was installed successfully.
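
    For reference, the traefik-dashboard-route.yaml applied above presumably defines a Traefik 2.x IngressRoute matching the host www.test.com. The sketch below shows what such a route generally looks like; the service name, port, and entry point are illustrative and not copied from the package:

    apiVersion: traefik.containo.us/v1alpha1
    kind: IngressRoute
    metadata:
      name: traefik-dashboard-route
      namespace: default
    spec:
      entryPoints:
        - web                    # assumed HTTP entry point defined in traefik-configmap.yaml
      routes:
        - match: Host(`www.test.com`)
          kind: Rule
          services:
            - name: traefik      # assumed Service exposing the traefik dashboard port
              port: 8080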


    Install the kube-dashboard UI


    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#tar xf kube-dashboard-v2.0.5-arm64.tar.gz
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#cd kube-dashboard-v2.0.5-arm64/
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/kube-dashboard-v2.0.5-arm64]#cat README.md
    ## Create the kubernetes-dashboard namespace
    kubectl create ns kubernetes-dashboard
    namespace/kubernetes-dashboard created
    
    ## Create the kubernetes-dashboard-certs secret
    Note: without this secret the kubernetes-dashboard pod cannot start successfully
    
    mkdir -p /tmp/kube-dashboard-key/ && cd /tmp/kube-dashboard-key/
    openssl genrsa -out dashboard.key 2048
    openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.1.131'  ## use the IP address of your own host
    openssl x509 -days 3650 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
    cd -
    kubectl create secret generic kubernetes-dashboard-certs --from-file=/tmp/kube-dashboard-key/dashboard.key --from-file=/tmp/kube-dashboard-key/dashboard.crt -n kubernetes-dashboard
    
    
    
    
    ## Load the dashboard image files
    Note: every node in the cluster must load them
    docker load < dashboard-v2.0.5-arm64-images.tar.gz
    
    ## Apply the yaml manifest
    kubectl apply -f  kubernetes-dashboard.yaml
    namespace/kubernetes-dashboard created
    serviceaccount/kubernetes-dashboard created
    service/kubernetes-dashboard created
    secret/kubernetes-dashboard-csrf created
    secret/kubernetes-dashboard-key-holder created
    configmap/kubernetes-dashboard-settings created
    role.rbac.authorization.k8s.io/kubernetes-dashboard created
    clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    deployment.apps/kubernetes-dashboard created
    service/dashboard-metrics-scraper created
    deployment.apps/dashboard-metrics-scraper created
    
    
    ## Check the dashboard
    kubectl get pod,svc -n kubernetes-dashboard
    
    NAME                                             READY   STATUS    RESTARTS   AGE
    pod/dashboard-metrics-scraper-678d548797-hjq5z   1/1     Running   0          4m46s
    pod/kubernetes-dashboard-664667b775-5lsdd        1/1     Running   0          42s
    
    NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
    service/dashboard-metrics-scraper   ClusterIP   10.96.68.73    <none>        8000/TCP        4m46s
    service/kubernetes-dashboard        NodePort    10.104.22.95   <none>        443:31234/TCP   4m46s
    
    
    ## Access via browser
    https://192.168.1.131:31234/
    
    ## How to obtain a token
    kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace)|
    awk '/^token/{print $2}'
    

    Install according to the README.md:

    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/kube-dashboard-v2.0.5-arm64]#kubectl create ns kubernetes-dashboard
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/kube-dashboard-v2.0.5-arm64]#mkdir -p /tmp/kube-dashboard-key/ && cd /tmp/kube-dashboard-key/
    [root@k8s-master(192.168.1.161) /tmp/kube-dashboard-key]#openssl genrsa -out dashboard.key 2048
    [root@k8s-master(192.168.1.161) /tmp/kube-dashboard-key]#openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.1.161'
    [root@k8s-master(192.168.1.161) /tmp/kube-dashboard-key]#openssl x509 -days 3650 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
    [root@k8s-master(192.168.1.161) /tmp/kube-dashboard-key]#cd -
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/kube-dashboard-v2.0.5-arm64]#kubectl create secret generic kubernetes-dashboard-certs --from-file=/tmp/kube-dashboard-key/dashboard.key --from-file=/tmp/kube-dashboard-key/dashboard.crt -n kubernetes-dashboard
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/kube-dashboard-v2.0.5-arm64]#docker load < dashboard-v2.0.5-arm64-images.tar.gz
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/kube-dashboard-v2.0.5-arm64]#kubectl apply -f  kubernetes-dashboard.yaml
    [root@k8s-master(192.168.1.161) ~/kubernetes-arm64/kube-dashboard-v2.0.5-arm64]#kubectl get pod,svc -n kubernetes-dashboard
    NAME                                             READY   STATUS    RESTARTS   AGE
    pod/dashboard-metrics-scraper-678d548797-vq7t6   1/1     Running   0          15s
    pod/kubernetes-dashboard-664667b775-2pbkf        1/1     Running   0          15s
    
    NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
    service/dashboard-metrics-scraper   ClusterIP   10.100.121.127   <none>        8000/TCP        15s
    service/kubernetes-dashboard        NodePort    10.105.90.90     <none>        443:31234/TCP   15s
    

    Visit https://192.168.1.161:31234/#/login in a browser. Note: this is https.

    [Figure: kubernetes dashboard login page]


    How to obtain the token is described in the README.md:

    [root@k8s-master(192.168.1.161) ~]#kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace)| awk '/^token/{print $2}'
    eyJhbGciOiJSUzI1NiIsImtpZCI6IlF1WHBWS2xzUEQzazctaE1fbTNENmpIZXFwZkZ2WUFGUEN5YzVFVWV0WVUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlci10b2tlbi1oaDlqZiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImFhMWFiNjczLTE4ZTktNDFmMC1hNmY0LTg1NDFmYjNkNjBjMiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpuYW1lc3BhY2UtY29udHJvbGxlciJ9.eBNteuGfC4xGicv9ghq2ZcfE2ju2g3QAp7JxUlZl4yo0HM9YAHgBKYaOok8oKBvFEWaKkC0EUHo1xhzrhK9rOF2OvSrQgpzRkztBRjpZh0Gmdk-8Jvhr8OAKOPiXn_FBmmJz6H2KAKUGWtihQx_YJEWiB18Ht97o2dh1bTsP3FdjoCbLe4Xs3nuQ4tM_tqy-CqzkcoQ1wAK-HhYC-dPV0GdMMhLXaXP6UsaYiyxkiAkH-71EgXbTqafSK2e_vdyMMaE0A3LlBELnOCSlGSzQCpDywLS5BdihU09M3MokMG9WVEDqpyWhl4IvtY9qhp9Jql9EyiQdQNa8hpWPAuQwnw
    

    Copy the token above into the text box on the page and click Sign in.

    [Figure: kubernetes dashboard after login]
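
    The namespace-controller token used above only carries that controller's permissions, so some dashboard views may report "forbidden" errors. A common alternative (not part of the offline package) is to create a dedicated ServiceAccount bound to cluster-admin and log in with its token instead:

    ## create an admin ServiceAccount for the dashboard and print its token
    kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
    kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
    kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret -o name | grep dashboard-admin) | awk '/^token/{print $2}'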


    At this point the k8s-master node installation is complete. Next, add worker nodes to the cluster.


    k8s-node Node Installation


    Install docker-ce

    [root@k8s-node1 docker-ce-19.03-arm64]# yum localinstall *.rpm -y
    [root@k8s-node1 docker-ce-19.03-arm64]# cp -a docker /etc/
    [root@k8s-node1 docker-ce-19.03-arm64]# systemctl enable docker ; systemctl start docker ; docker info
    

    Install the Kubernetes packages

    [root@k8s-node1 v1.19.7]# swapoff -a
    [root@k8s-node1 v1.19.7]# sed -i '/swap/ s$^\(.*\)$#\1$g' /etc/fstab
    [root@k8s-node1 v1.19.7]# yum localinstall *.rpm -y
    [root@k8s-node1 v1.19.7]# kubeadm completion bash > /etc/bash_completion.d/kubeadm
    [root@k8s-node1 v1.19.7]# kubectl completion bash > /etc/bash_completion.d/kubectl
    

    Load the master node's images on the worker


    1. Export all image files from the master node
    [root@k8s-master(192.168.1.161) ~]#docker save $(docker images | awk 'NR>1{print $1":"$2}' | awk '{printf "%s ",$0}') | gzip > k8s-arm-1.19.7.tar.gz
    [root@k8s-master(192.168.1.161) ~]#scp k8s-arm-1.19.7.tar.gz k8s-node1:/root/
    

    2. Load the image files on the node
    [root@k8s-node1 ~]# docker load < k8s-arm-1.19.7.tar.gz
    

    Join the Kubernetes cluster

    The join command was printed when the k8s-master initialization finished.

    [root@k8s-node1 ~]# kubeadm join 192.168.1.161:6443 --token abcdef.0123456789abcdef \
    >     --discovery-token-ca-cert-hash sha256:b0f1cd12aed05afad71577b79ba2f9c37e2472b33b08c1c712753b9b455424a1
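
    If the bootstrap token has expired by the time a node joins (the default TTL is 24h unless it was extended in init-default.yaml), a fresh join command can be generated on the master; this is standard kubeadm behaviour rather than anything specific to this package:

    ## run on k8s-master to print a new "kubeadm join ..." command
    kubeadm token create --print-join-command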
    

    Verify that the node joined the cluster

    Check on the k8s-master node:

    [root@k8s-master(192.168.1.161) ~]#kubectl get nodes
    NAME         STATUS   ROLES    AGE   VERSION
    k8s-master   Ready    master   53m   v1.19.7
    k8s-node1    Ready    <none>   48s   v1.19.7
    

    Repeat the same steps on the other worker node. After it joins, the cluster looks like this:

    [root@k8s-master(192.168.1.161) ~]#kubectl get nodes
    NAME         STATUS   ROLES    AGE     VERSION
    k8s-master   Ready    master   86m     v1.19.7
    k8s-node1    Ready    <none>   33m     v1.19.7
    k8s-node2    Ready    <none>   2m56s   v1.19.7
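
    The ROLES column shows <none> for the workers because kubeadm only labels the control-plane node. If you want the workers to display a role, you can label them yourself; this is purely cosmetic and not required by anything else in this deployment:

    kubectl label node k8s-node1 node-role.kubernetes.io/worker=
    kubectl label node k8s-node2 node-role.kubernetes.io/worker=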
    


    Harbor Installation and Deployment


    System Initialization

    Refer to the system initialization steps in the Kubernetes section.


    Install docker-ce

    [root@harbor ~]# unzip kubernetes-arm64.zip
    [root@harbor ~]# cd kubernetes-arm64
    [root@harbor kubernetes-arm64]# cd docker-ce-19.03-arm64
    [root@harbor docker-ce-19.03-arm64]# yum localinstall *.rpm -y
    [root@harbor docker-ce-19.03-arm64]# cp -a docker /etc/
    [root@harbor docker-ce-19.03-arm64]# systemctl enable docker ; systemctl start docker ; docker info
    

    Install docker-compose

    [root@harbor kubernetes-arm64]# tar xf docker-compose-arm64-1.22.tar.gz
    [root@harbor docker-compose-arm64-1.22]# cat README.md
    ## Install docker-compose
    yum localinstall *.rpm -y
    ## Check the version
    docker-compose version
    

    Follow the README.md:

    [root@harbor docker-compose-arm64-1.22]# yum localinstall *.rpm -y
    [root@harbor docker-compose-arm64-1.22]# docker-compose version
    docker-compose version 1.22.0, build f46880f
    docker-py version: 4.0.2
    CPython version: 3.7.4
    OpenSSL version: OpenSSL 1.1.1d  10 Sep 2019
    


    Install and deploy Harbor

    [root@harbor kubernetes-arm64]# tar xf harbor-v1.9.1-arm64.tar.gz
    [root@harbor kubernetes-arm64]# cd harbor-v1.9.1-arm64
    [root@harbor harbor-v1.9.1-arm64]# cat README.md
    ## Prerequisites
    1. docker-ce installed (see docker-ce-19.03-arm64.tar.gz)
    2. docker-compose installed (see docker-compose-arm64-1.22.tar.gz)
    
    ## Change the IP address in harbor.yml
    On line 5, set hostname to the IP of this host
    
    5 hostname: 192.168.1.133
    
    ## Make sure prepare is executable
    chmod 777 prepare
    
    
    ## Run install.sh to install Harbor
    ./install.sh
    ---
    Creating network "harbor-arm64-v191_harbor" with the default driver
    Creating harbor-log ... done
    Creating registry      ... done
    Creating registryctl   ... done
    Creating harbor-portal ... done
    Creating redis         ... done
    Creating harbor-db     ... done
    Creating harbor-core   ... done
    Creating nginx             ... done
    Creating harbor-jobservice ... done
    
    ✔ ----Harbor has been installed and started successfully.----
    
    Now you should be able to visit the admin portal at http://192.168.1.133.
    For more details, please visit https://github.com/goharbor/harbor .
    ---
    
    Output like the above means the installation succeeded.
    
    ## Check docker-compose
    docker-compose ps
          Name                     Command                       State                     Ports
    ------------------------------------------------------------------------------------------------------
    harbor-core         /harbor/harbor_core              Up (healthy)
    harbor-db           /docker-entrypoint.sh            Up (health: starting)   5432/tcp
    harbor-jobservice   /harbor/harbor_jobservice  ...   Up (health: starting)
    harbor-log          /bin/sh -c /usr/local/bin/ ...   Up (healthy)            127.0.0.1:1514->10514/tcp
    harbor-portal       nginx -g daemon off;             Up (healthy)            8080/tcp
    nginx               nginx -g daemon off;             Up (health: starting)   0.0.0.0:80->8080/tcp
    redis               docker-entrypoint.sh redis ...   Up                      6379/tcp
    registry            /entrypoint.sh /etc/regist ...   Up (healthy)            5000/tcp
    registryctl         /harbor/start.sh                 Up (healthy)
    
    ## Access via browser:
    http://IP/
    

    Follow the README.md:

    [root@harbor harbor-v1.9.1-arm64]# vim harbor.yml
    ...
      5 hostname: 192.168.1.164        # line 5: change this to the address of your own Harbor server
    ... 
    [root@harbor harbor-v1.9.1-arm64]# ls
    common  docker-compose.yml  harbor-arm64-images-v1.9.1.tar.gz  harbor.yml  install.sh  prepare  README.md
    
    ### Run the install.sh script ###
    [root@harbor harbor-v1.9.1-arm64]# ./install.sh
    ...
    ✔ ----Harbor has been installed and started successfully.----
    ...
    
    # Check docker-compose
    [root@harbor harbor-v1.9.1-arm64]# docker-compose ps
          Name                     Command                       State                     Ports
    ------------------------------------------------------------------------------------------------------
    harbor-core         /harbor/harbor_core              Up (healthy)
    harbor-db           /docker-entrypoint.sh            Up (health: starting)   5432/tcp
    harbor-jobservice   /harbor/harbor_jobservice  ...   Up (healthy)
    harbor-log          /bin/sh -c /usr/local/bin/ ...   Up (healthy)            127.0.0.1:1514->10514/tcp
    harbor-portal       nginx -g daemon off;             Up (healthy)            8080/tcp
    nginx               nginx -g daemon off;             Up (healthy)            0.0.0.0:80->8080/tcp
    redis               docker-entrypoint.sh redis ...   Up                      6379/tcp
    registry            /entrypoint.sh /etc/regist ...   Up (healthy)            5000/tcp
    registryctl         /harbor/start.sh                 Up (healthy)
    

    Visit http://192.168.1.164/ in a browser.

    [Figure: Harbor web login page]
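
    To actually push images to this registry from the Kubernetes nodes, docker on those nodes must trust the plain-HTTP endpoint (the insecure-registries setting mentioned in the docker-ce section) and log in first. A rough sketch of the usual steps; the project name library is Harbor's default public project, and the image name is only an example:

    ## log in as admin (default password Harbor12345 unless changed in harbor.yml), then tag and push
    docker login 192.168.1.164 -u admin
    docker tag traefik:2.1.2 192.168.1.164/library/traefik:2.1.2
    docker push 192.168.1.164/library/traefik:2.1.2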




    --- EOF ---
  • Original article: https://www.cnblogs.com/hukey/p/15200751.html