• sealer: build a custom k8s image and deploy a highly available cluster


    sealer can build customized k8s images. If you want to bake a dashboard or the helm package manager into the k8s image, you can do it directly with sealer.

    A k8s high-availability cluster deployed by sealer comes with load balancing built in. For cluster HA, sealer uses lvscare, a lightweight load balancer. Compared with other load balancers, lvscare is tiny, only a few hundred lines of code, and it only maintains ipvs rules rather than forwarding traffic itself, which makes it very stable. It runs on each node and watches the apiservers directly: if an apiserver goes down, the corresponding rule is removed, and once the apiserver recovers the rule is added back automatically. In effect it acts as a dedicated load balancer for the apiserver.
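
    To see what lvscare is doing, you can inspect the ipvs table on a worker node. This is only a quick sanity check; the virtual apiserver address maintained by lvscare is whatever your kubelet kubeconfig points at (10.103.97.2:6443 is a common sealer default, but confirm it on your own cluster).

    # list the ipvs virtual servers that lvscare maintains; the real servers behind
    # the virtual apiserver address should be the masters' :6443 endpoints
    ipvsadm -Ln

    # the worker's kubelet kubeconfig should point at that virtual address
    grep server /etc/kubernetes/kubelet.conf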

    The deployment method is the same as with sealos: you specify the master nodes and the worker nodes.

    1. Download and install sealer

    sealer is a single Go binary, so installation is just downloading the archive and extracting it into a directory on your PATH; it can also be downloaded from the project's releases page.
    wget -c https://sealer.oss-cn-beijing.aliyuncs.com/sealer-latest.tar.gz && \
        tar -xvf sealer-latest.tar.gz -C /usr/bin
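
    After extracting, a quick check that the binary is in place:

    sealer version   # prints sealer's build info if the installation succeeded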
    

    2. Create and edit a Kubefile (building a custom k8s image requires familiarity with Dockerfile-style instructions). The helm binary, the kube-prometheus manifests, and the loki chart referenced below must already be present in the build directory; see the preparation sketch after this block.

    touch kubefile
    
    FROM registry.cn-qingdao.aliyuncs.com/sealer-io/kubernetes:v1.22.5  # pick the k8s version you need
    COPY helm /usr/bin                                                  # this helm binary was already extracted from its archive and moved under /root/
    COPY kube-prometheus-0.11.0 .                                       # your version, and hence the extracted directory name, may differ
    COPY loki-stack-2.1.2.tgz .                                         # the loki chart package can be downloaded with helm
    CMD kubectl apply -f kube-prometheus-0.11.0/manifests/setup
    CMD kubectl apply -f kube-prometheus-0.11.0/manifests
    CMD helm install loki loki-stack-2.1.2.tgz -n monitoring
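
    A rough sketch of how that build context could be prepared. The download URLs, the chart repository, and the exact versions below are illustrative assumptions, not from the original post; adjust them to whatever you actually want to ship.

    # helm binary (the archive extracts to linux-amd64/helm)
    wget https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz
    tar -xvf helm-v3.9.0-linux-amd64.tar.gz && cp linux-amd64/helm .

    # kube-prometheus manifests (extracts to kube-prometheus-0.11.0/)
    wget https://github.com/prometheus-operator/kube-prometheus/archive/refs/tags/v0.11.0.tar.gz
    tar -xvf v0.11.0.tar.gz

    # loki-stack chart, pulled with helm as mentioned above
    # (use a --version that exists in the repo; the resulting filename will match it)
    ./helm repo add grafana https://grafana.github.io/helm-charts
    ./helm pull grafana/loki-stack --version 2.1.2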
    

    3. Build the image

    sealer build -t k8s:v1.22.5 .   # name and tag the image however you like
    2022-06-29 23:02:41 [INFO] [executor.go:123] start to check the middleware file
    2022-06-29 23:02:41 [INFO] [executor.go:63] run build layer: COPY helm /usr/bin
    2022-06-29 23:02:42 [INFO] [executor.go:63] run build layer: COPY kube-prometheus-0.11.0 .
    2022-06-29 23:02:42 [INFO] [executor.go:63] run build layer: COPY loki-stack-2.1.2.tgz .
    2022-06-29 23:02:42 [INFO] [executor.go:95] exec all build instructs success
    2022-06-29 23:02:42 [WARN] [executor.go:112] no rootfs diff content found
    2022-06-29 23:02:42 [INFO] [build.go:100] build image amd64 k8s:v1.22.5 success
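
    If the target machines have no internet access, the built image can be exported to a tarball and loaded on the other side. The save/load subcommands below follow sealer's documented usage, but flags can vary between releases, so treat this as a sketch:

    sealer save -o k8s-v1.22.5.tar k8s:v1.22.5   # export the image to a tar archive
    sealer load -i k8s-v1.22.5.tar               # load it on the target machine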
    

    4. List the images and deploy the cluster

    [root@master3 ~]# sealer images 
    +---------------------------+------------------------------------------------------------------+-------+---------+---------------------+-----------+
    |        IMAGE NAME         |                             IMAGE ID                             | ARCH  | VARIANT |       CREATE        |   SIZE    |
    +---------------------------+------------------------------------------------------------------+-------+---------+---------------------+-----------+
    | k8s:v1.22.5               | ef293898df6f5a9a01bd5bc5708820ef9ff25acfe56ea20cfe3a45a725f59bb5 | amd64 |         | 2022-06-29 23:02:42 | 1004.76MB |
    | kubernetes:v1.22.5        | 46f8c423be130a508116f41cda013502094804525c1274bc84296b674fe17618 | amd64 |         | 2022-06-29 23:02:42 | 956.60MB  |
    +---------------------------+------------------------------------------------------------------+-------+---------+---------------------+-----------+
    
    
    sealer run k8s:v1.22.5 --masters 192.168.200.3,192.168.200.4,192.168.200.5 \
        --nodes 192.168.200.6 \
        --user root \
        --passwd admin
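
    Nodes can also be added after the initial run without rebuilding the image. The join command below follows sealer's documented usage; the IP address is just a placeholder for a new worker:

    sealer join --nodes 192.168.200.7   # join an additional worker to the running cluster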
    

    5. Check the node status

    [root@master3 ~]# kubectl get nodes
    NAME      STATUS   ROLES    AGE   VERSION
    master3   Ready    master   40h   v1.22.5
    master4   Ready    master   40h   v1.22.5
    master5   Ready    master   40h   v1.22.5
    node6     Ready    <none>   40h   v1.22.5
    

    Check the status of all pods:

    [root@master3 ~]# kubectl get po --all-namespaces -o wide 
    NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
    calico-apiserver   calico-apiserver-57447598b7-46kbs          1/1     Running   5          40h   100.84.137.67    master5   <none>           <none>
    calico-apiserver   calico-apiserver-57447598b7-hlws5          1/1     Running   2          40h   100.68.136.4     master3   <none>           <none>
    calico-system      calico-kube-controllers-69dfd59986-tb496   1/1     Running   6          40h   100.125.38.209   node6     <none>           <none>
    calico-system      calico-node-69dld                          1/1     Running   1          40h   192.168.200.6    node6     <none>           <none>
    calico-system      calico-node-kdjzs                          1/1     Running   4          40h   192.168.200.3    master3   <none>           <none>
    calico-system      calico-node-prktp                          1/1     Running   1          40h   192.168.200.5    master5   <none>           <none>
    calico-system      calico-node-tm285                          1/1     Running   3          40h   192.168.200.4    master4   <none>           <none>
    calico-system      calico-typha-779b5cfd4c-x4wnz              1/1     Running   2          40h   192.168.200.4    master4   <none>           <none>
    calico-system      calico-typha-779b5cfd4c-zftxs              1/1     Running   1          40h   192.168.200.6    node6     <none>           <none>
    kube-system        coredns-55bcc669d7-pvj45                   1/1     Running   2          40h   100.68.136.3     master3   <none>           <none>
    kube-system        coredns-55bcc669d7-xkwvm                   1/1     Running   1          40h   100.84.137.68    master5   <none>           <none>
    kube-system        etcd-master3                               1/1     Running   2          40h   192.168.200.3    master3   <none>           <none>
    kube-system        etcd-master4                               1/1     Running   2          40h   192.168.200.4    master4   <none>           <none>
    kube-system        etcd-master5                               1/1     Running   2          40h   192.168.200.5    master5   <none>           <none>
    kube-system        kube-apiserver-master3                     1/1     Running   3          40h   192.168.200.3    master3   <none>           <none>
    kube-system        kube-apiserver-master4                     1/1     Running   2          40h   192.168.200.4    master4   <none>           <none>
    kube-system        kube-apiserver-master5                     1/1     Running   3          40h   192.168.200.5    master5   <none>           <none>
    kube-system        kube-controller-manager-master3            1/1     Running   3          40h   192.168.200.3    master3   <none>           <none>
    kube-system        kube-controller-manager-master4            1/1     Running   9          40h   192.168.200.4    master4   <none>           <none>
    kube-system        kube-controller-manager-master5            1/1     Running   8          40h   192.168.200.5    master5   <none>           <none>
    kube-system        kube-lvscare-node6                         1/1     Running   1          40h   192.168.200.6    node6     <none>           <none>
    kube-system        kube-proxy-99cjb                           1/1     Running   1          40h   192.168.200.3    master3   <none>           <none>
    kube-system        kube-proxy-lmdn6                           1/1     Running   1          40h   192.168.200.4    master4   <none>           <none>
    kube-system        kube-proxy-ns9c5                           1/1     Running   1          40h   192.168.200.5    master5   <none>           <none>
    kube-system        kube-proxy-xf6fx                           1/1     Running   1          40h   192.168.200.6    node6     <none>           <none>
    kube-system        kube-scheduler-master3                     1/1     Running   4          40h   192.168.200.3    master3   <none>           <none>
    kube-system        kube-scheduler-master4                     1/1     Running   5          40h   192.168.200.4    master4   <none>           <none>
    kube-system        kube-scheduler-master5                     1/1     Running   7          40h   192.168.200.5    master5   <none>           <none>
    monitoring         alertmanager-main-0                        2/2     Running   2          40h   100.125.38.210   node6     <none>           <none>
    monitoring         alertmanager-main-1                        0/2     Pending   0          40h   <none>           <none>    <none>           <none>
    monitoring         alertmanager-main-2                        0/2     Pending   0          40h   <none>           <none>    <none>           <none>
    monitoring         blackbox-exporter-5c545d55d6-c8997         3/3     Running   3          40h   100.125.38.203   node6     <none>           <none>
    monitoring         grafana-785db9984-xhrwx                    1/1     Running   1          40h   100.125.38.204   node6     <none>           <none>
    monitoring         kube-state-metrics-54bd6b479c-jvt76        3/3     Running   3          40h   100.125.38.202   node6     <none>           <none>
    monitoring         node-exporter-5hl54                        2/2     Running   2          40h   192.168.200.4    master4   <none>           <none>
    monitoring         node-exporter-89jbp                        2/2     Running   2          40h   192.168.200.3    master3   <none>           <none>
    monitoring         node-exporter-mqm4n                        2/2     Running   2          40h   192.168.200.6    node6     <none>           <none>
    monitoring         node-exporter-mx6qr                        2/2     Running   2          40h   192.168.200.5    master5   <none>           <none>
    monitoring         prometheus-adapter-7dbf69cc-65hp9          1/1     Running   1          40h   100.125.38.205   node6     <none>           <none>
    monitoring         prometheus-adapter-7dbf69cc-xnjv4          1/1     Running   1          40h   100.125.38.207   node6     <none>           <none>
    monitoring         prometheus-k8s-0                           2/2     Running   2          40h   100.125.38.208   node6     <none>           <none>
    monitoring         prometheus-k8s-1                           0/2     Pending   0          40h   <none>           <none>    <none>           <none>
    monitoring         prometheus-operator-54dd69bbf6-h5szm       2/2     Running   2          40h   100.125.38.206   node6     <none>           <none>
    tigera-operator    tigera-operator-7cdb76dd8b-hfhrt           1/1     Running   9          40h   192.168.200.6    node6     <none>           <none>
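
    A few alertmanager and prometheus replicas are Pending, most likely because not every replica can be scheduled with only one schedulable worker node; the monitoring stack still works. To reach the UIs that kube-prometheus ships, a port-forward is the simplest route (the service names below are kube-prometheus defaults):

    kubectl -n monitoring port-forward svc/grafana 3000          # Grafana at http://localhost:3000 (default login admin/admin)
    kubectl -n monitoring port-forward svc/prometheus-k8s 9090   # Prometheus at http://localhost:9090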
    