• k8s ZooKeeper installation (cluster and standalone versions)


    Cluster ZooKeeper installation

    Step 1: Add the Helm chart repository

    helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
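
    A quick check that the repository was added correctly (a sketch assuming Helm 3 syntax; Helm 2 would use "helm search incubator/zookeeper" instead):

    helm repo update
    helm search repo incubator/zookeeper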
    

    Step 2: Download the ZooKeeper chart

    helm fetch incubator/zookeeper
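
    helm fetch saves the chart as a packaged .tgz archive; to edit values.yaml in the next step, unpack it first (the exact file name depends on the chart version):

    tar -zxf zookeeper-*.tgz
    cd zookeeper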
    

    Step 3: Modify the persistence settings in values.yaml

    ...
    persistence:
      enabled: true
      ## zookeeper data Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
      ##   GKE, AWS & OpenStack)
      ##
      storageClass: "nfs-client"
      accessMode: ReadWriteOnce
      size: 5Gi
    ...
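
    Once the storage described in the notes below is available, the modified chart can be installed. A minimal sketch assuming Helm 3 and the unpacked chart directory from Step 2; the release name zookeeper and the namespace xxxxxx are placeholders:

    helm install zookeeper ./zookeeper -n xxxxxx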
    

    Note:

    1. If you already have storage, you can skip the steps below; just substitute your existing storageClass name in values.yaml.

    List the StorageClasses and substitute the corresponding NAME:

    kubectl get sc -n <namespace>

    [root@k8s-master zookeeper]# kubectl get sc -n xxxxxx
    NAME         PROVISIONER                                           RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    nfs-client   cluster.local/moldy-seagull-nfs-client-provisioner   Delete          Immediate           true                   16d
    
    
    2. If you do not have storage yet, pay attention to the storage type and address when running the steps below.

    Set up the storage (the storageClass name is the shared storage volume listed by kubectl get sc -n <namespace>; if none exists, install one by following the steps below).

    1. If the cluster version is 1.19+
    # Fill in the storage address for x.x.x.x, e.g. for NFS shared storage use an IP such as 192.168.8.158
    helm install --set nfs.server=x.x.x.x --set nfs.path=/exported/path stable/nfs-client-provisioner
    

    If the following error appears:

    Error: failed to download "stable/nfs-client-provisioner" (hint: running `helm repo update` may help)
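
    A likely cause is that the stable repository has not been added; a sketch of adding the archived stable repo and retrying:

    helm repo add stable https://charts.helm.sh/stable
    helm repo update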
    
    2. If the cluster version is below 1.19, apply the following YAML files:
    $ kubectl create -f nfs-client-sa.yaml
    $ kubectl create -f nfs-client-class.yaml
    $ kubectl create -f nfs-client.yaml
    

    Pay attention to the storage address in nfs-client.yaml!

    nfs-client-sa.yaml

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
    
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["list", "watch", "create", "update", "patch"]
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
    
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        namespace: default
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    

    nfs-client-class.yaml (note: this creates a StorageClass named course-nfs-storage; if you take this route, reference that name as storageClass in values.yaml instead of nfs-client)

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: course-nfs-storage
    provisioner: fuseim.pri/ifs
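
    To verify dynamic provisioning once the provisioner from nfs-client.yaml below is running, a minimal test PVC can reference this StorageClass (the name test-claim is only an example):

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: test-claim
    spec:
      storageClassName: course-nfs-storage
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Mi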
    

    nfs-client.yaml

    Change the values of the NFS_SERVER and NFS_PATH env vars (and the matching nfs volume at the bottom) to your actual NFS server; the 192.168.8.158 address below is only an example.

    kind: Deployment
    apiVersion: apps/v1 
    metadata:
      name: nfs-client-provisioner
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nfs-client-provisioner
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: quay.io/external_storage/nfs-client-provisioner:latest
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: fuseim.pri/ifs
                - name: NFS_SERVER
                  value: 192.168.8.158
                - name: NFS_PATH
                  value: /data/k8s
          volumes:
            - name: nfs-client-root
              nfs:
                server: 192.168.8.158
                path: /data/k8s
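
    After applying the three files, a quick check that the provisioner pod is running and the StorageClass exists (a sketch; add -n <namespace> if you applied them into a specific namespace):

    kubectl get pods | grep nfs-client-provisioner
    kubectl get sc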
    

    Standalone (non-cluster) ZooKeeper installation

    Note: adjust the storage addresses in zookeeper.yaml to your environment (there are three PVs to modify).

    kubectl apply -f zookeeper.yaml -n xxxxx
    

    zookeeper.yaml

    ## Create the Service
    ---
    apiVersion: v1
    kind: Service
    metadata:
     name: zookeeper
     labels:
      name: zookeeper
    spec:
     type: NodePort
     ports:
     - port: 2181
       protocol: TCP
       targetPort: 2181
       name: zookeeper-2181
       nodePort: 30000
     - port: 2888
       protocol: TCP
       targetPort: 2888
       name: zookeeper-2888
     - port: 3888
       protocol: TCP
       targetPort: 3888
       name: zookeeper-3888
     selector:
       name: zookeeper
    ---
    
    ## Create the PV
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
     name: zookeeper-data-pv
     labels:
       pv: zookeeper-data-pv
    
    spec:
     capacity:
       storage: 10Gi
     accessModes:
       - ReadWriteMany
     persistentVolumeReclaimPolicy: Retain
     ## Note: adjust the PV's NFS storage address below to your environment
     nfs:            # NFS settings
       server: 192.168.8.158
       path: /data/k8s
    ## Create the PVC
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
     name: zookeeper-data-pvc
    spec:
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 10Gi
     selector:
       matchLabels:
         pv: zookeeper-data-pv
    ---
    ## Create the PV
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
     name: zookeeper-datalog-pv
     labels:
       pv: zookeeper-datalog-pv
    
    spec:
     capacity:
       storage: 10Gi
     accessModes:
       - ReadWriteMany
     persistentVolumeReclaimPolicy: Retain
     ## Note: adjust the PV's NFS storage address below to your environment
     nfs:            # NFS settings
       server: 192.168.8.158
       path: /data/k8s
    ## Create the PVC
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
     name: zookeeper-datalog-pvc
    spec:
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 10Gi
     selector:
       matchLabels:
         pv: zookeeper-datalog-pv
    
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
     name: zookeeper-logs-pv
     labels:
       pv: zookeeper-logs-pv
    
    spec:
     capacity:
       storage: 10Gi
     accessModes:
       - ReadWriteMany
     persistentVolumeReclaimPolicy: Retain
     ## Note: adjust the PV's NFS storage address below to your environment
     nfs:
       server: 192.168.8.158
       path: /data/k8s
    
    ## Create the PVC
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
     name: zookeeper-logs-pvc
    spec:
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 10Gi
     selector:
       matchLabels:
         pv: zookeeper-logs-pv
    
    ---
    ## Deployment
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: zookeeper
     labels:
       name: zookeeper
    spec:
     replicas: 1
     selector:
        matchLabels:
            name: zookeeper
     template:
       metadata:
         labels:
          name: zookeeper
       spec:
         containers:
         - name: zookeeper
           image: zookeeper:3.4.13
           imagePullPolicy: Always
           volumeMounts:
           - mountPath: /logs
             name: zookeeper-logs
           - mountPath: /data
             name: zookeeper-data
           - mountPath: /datalog
             name: zookeeper-datalog
           ports:
           - containerPort: 2181
           - containerPort: 2888
           - containerPort: 3888
         volumes:
         - name: zookeeper-logs
           persistentVolumeClaim:
             claimName: zookeeper-logs-pvc
         - name: zookeeper-data
           persistentVolumeClaim:
             claimName: zookeeper-data-pvc
         - name: zookeeper-datalog
           persistentVolumeClaim:
             claimName: zookeeper-datalog-pvc
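
    After applying zookeeper.yaml, a quick connectivity check (a sketch; zkCli.sh is on the PATH in the official zookeeper:3.4.13 image, xxxxx is the namespace placeholder used above, and exec on deploy/<name> needs a reasonably recent kubectl):

    kubectl get pods -n xxxxx -l name=zookeeper
    kubectl exec -it deploy/zookeeper -n xxxxx -- zkCli.sh -server localhost:2181 ls /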
    
    

    Install nimbus

    Step 1: Install the nimbus configuration ConfigMap

    Note: zookeeper in nimbus-cm.yaml is the name of the ZooKeeper Service.

    kubectl apply -f nimbus-cm.yaml -n xxxxxx
    

    nimbus-cm.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nimbus-cm
    data:
      storm.yaml: |
        # DataSource
        storm.zookeeper.servers: [zookeeper]
        nimbus.seeds: [nimbus]
        storm.log.dir: "/logs"
        storm.local.dir: "/data"
    
    
    Step 2: Install the Deployment
    kubectl apply -f nimbus.yaml -n xxxxxx
    

    nimbus.yaml

    When creating the PVs, note the storage addresses and adjust them to your environment.

    ## Create the Service
    apiVersion: v1
    kind: Service
    metadata:
     name: nimbus
     labels: 
      name: nimbus
    spec:
     ports:
     - port: 6627
       protocol: TCP
       targetPort: 6627
       name: nimbus-6627
     selector:
       name: storm-nimbus
    ---
    ## Create the PV; adjust the NFS storage address to your environment
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
     name: storm-nimbus-data-pv
     labels:
       pv: storm-nimbus-data-pv
    spec:
     capacity:
       storage: 5Gi
     accessModes:
       - ReadWriteMany
     persistentVolumeReclaimPolicy: Retain
     nfs:
       server: 192.168.8.158
       path: /data/k8s
    
    ## Create the PVC
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
     name: storm-nimbus-data-pvc
    spec:
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 5Gi
     selector:
       matchLabels:
         pv: storm-nimbus-data-pv
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
     name: storm-nimbus-logs-pv
     labels:
       pv: storm-nimbus-logs-pv
    spec:
     capacity:
       storage: 5Gi
     accessModes:
       - ReadWriteMany
     persistentVolumeReclaimPolicy: Retain
     nfs:
       server: 192.168.8.158
       path: /data/k8s
    ## Create the PVC
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
     name: storm-nimbus-logs-pvc
    spec:
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 5Gi
     selector:
       matchLabels:
         pv: storm-nimbus-logs-pv
    ---
    ## Deployment
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: storm-nimbus
     labels:
       name: storm-nimbus
    spec:
     replicas: 1
     selector:
        matchLabels:
            name: storm-nimbus
     template:
       metadata:
         labels: 
          name: storm-nimbus
       spec:
         hostname: nimbus
         imagePullSecrets:
         - name: e6-aliyun-image
         containers:
         - name: storm-nimbus
           image: storm:1.2.2
           imagePullPolicy: Always
           command:
           - storm
           - nimbus
           #args:
           #- nimbus
           volumeMounts:
           - mountPath: /conf/
             name: configmap-volume
           - mountPath: /logs
             name: storm-nimbus-logs
           - mountPath: /data
             name: storm-nimbus-data
           ports:
           - containerPort: 6627
         volumes:
         - name: storm-nimbus-logs
           persistentVolumeClaim:
             claimName: storm-nimbus-logs-pvc
         - name: storm-nimbus-data
           persistentVolumeClaim:
             claimName: storm-nimbus-data-pvc
         - name: configmap-volume
           configMap:
             name: nimbus-cm
    #     hostNetwork: true
    #     dnsPolicy: ClusterFirstWithHostNet    
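
    To confirm nimbus started and picked up the ConfigMap mounted at /conf, a sketch (xxxxxx is the namespace placeholder):

    kubectl get pods -n xxxxxx -l name=storm-nimbus
    kubectl logs -n xxxxxx deploy/storm-nimbus | tail -n 20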
    

    Install nimbus-ui (Storm UI)

    Step 1: Install the Deployment
    kubectl create deployment stormui --image=adejonge/storm-ui -n xxxxxx
    
    Step 2: Install the Service
    kubectl expose deployment stormui --port=8080 --type=NodePort -n xxxxxxx
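
    To find the NodePort assigned to the UI, a sketch:

    kubectl get svc stormui -n xxxxxxx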
    
    Step 3: Create the ConfigMap

    Install zk-ui

    Installation:
    kubectl apply -f zookeeper-program-ui.yaml -n xxxxxxx
    
    Configuration file

    zookeeper-program-ui.yaml

    ## Create the Service
    ---
    apiVersion: v1
    kind: Service
    metadata:
     name: zookeeper-ui
     labels: 
      name: zookeeper-ui
    spec:
     type: NodePort
     ports:
     - port: 9090
       protocol: TCP
       targetPort: 9090
       name: zookeeper-ui-9090
       nodePort: 30012
     selector:
       name: zookeeper-ui
    
    ---
    ## Deployment
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: zookeeper-ui
     labels:
       name: zookeeper-ui
    spec:
     replicas: 1
     selector:
        matchLabels:
            name: zookeeper-ui
     template:
       metadata:
         labels: 
          name: zookeeper-ui
       spec:
         containers:
         - name: zookeeper-ui
           image: maauso/zkui
           imagePullPolicy: Always
           env:
           - name: ZKLIST
             value: 192.168.8.158:30000
           ports:
           - containerPort: 9090
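
    Once deployed, the UI should be reachable on any node IP at port 30012. A sketch of checking the Service; admin/manager are zkui's documented default credentials, assuming the maauso/zkui image does not override them:

    kubectl get svc zookeeper-ui -n xxxxxxx
    # then open http://<node-ip>:30012 and log in with the zkui defaults (admin / manager)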
    
  • Original article: https://www.cnblogs.com/lanheader/p/14153860.html