• Installing EFK with Helm, with ES X-Pack authentication


Prerequisites

This guide assumes an existing Kubernetes cluster with Helm installed; setting up the cluster and Helm is not covered here.

To deploy from this guide you must provide your own storageClass (NFS, Ceph, OpenEBS, etc.).

Software          Version
chart             7.16.3
elasticsearch     7.16.3
filebeat          7.16.3
kibana            7.16.3
kubernetes        v1.19.3
helm              v3.8.1

Installing the EFK stack with Helm

1. Add the elastic Helm repository

    $ helm repo add elastic https://helm.elastic.co
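
    To confirm the repository was registered, you can list the configured repos:

    $ helm repo list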
    

2. Search for the EFK charts (the search shows the newest published chart versions; the installs below pin 7.16.3 with --version)

    $ helm repo update
    $ helm search repo elasticsearch
    NAME                                	CHART VERSION	APP VERSION	DESCRIPTION                                       
    elastic/elasticsearch               	7.17.3       	7.17.3     	Official Elastic helm chart for Elasticsearch                    
    
    $ helm search repo filebeat
    NAME            	CHART VERSION	APP VERSION	DESCRIPTION                             
    elastic/filebeat	7.17.3       	7.17.3     	Official Elastic helm chart for Filebeat
    
    $ helm search repo kibana
    NAME                     	CHART VERSION	APP VERSION	DESCRIPTION                                       
    elastic/kibana           	7.17.3       	7.17.3     	Official Elastic helm chart for Kibana 
    
    

Installing the elasticsearch cluster with Helm, with X-Pack authentication enabled

1. Create the cluster certificates

    $ mkdir -p /root/elk/es/certs && cd /root/elk/es/certs
    
# Run a throwaway container to generate the CA and node certificates
    docker run --name elastic-charts-certs -i -w /app elasticsearch:7.16.3 /bin/sh -c  \
      "elasticsearch-certutil ca --out /app/elastic-stack-ca.p12 --pass '' && \
        elasticsearch-certutil cert --ca /app/elastic-stack-ca.p12 --pass '' --ca-pass '' --out /app/elastic-certificates.p12"
    
# Copy the generated certificate out of the container
    docker cp elastic-charts-certs:/app/elastic-certificates.p12 ./ 
    
# Remove the container
    docker rm -f elastic-charts-certs
    
# Extract the key and certificates from the PKCS#12 keystore into a PEM file
    openssl pkcs12 -nodes -passin pass:'' -in elastic-certificates.p12 -out elastic-certificate.pem
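
    As an optional sanity check, you can inspect the generated keystore with openssl (the empty password matches the one used above):

    # Optional: list the certificates inside the PKCS#12 keystore
    $ openssl pkcs12 -info -in elastic-certificates.p12 -passin pass:'' -nokeys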
    

2. Add the certificates to the cluster

    $ cd /root/elk/es/certs
    
# Create the test-middleware namespace
    $ kubectl create ns test-middleware
    
# Add the certificates as secrets
    $ kubectl -n test-middleware create secret generic elastic-certificates --from-file=elastic-certificates.p12
    
    $ kubectl -n test-middleware create secret generic elastic-certificate-pem --from-file=elastic-certificate.pem
    
# Set the cluster credentials; changing the username is not recommended (both ES and Kibana use this account)
    $ kubectl -n test-middleware create secret generic elastic-credentials \
      --from-literal=username=elastic --from-literal=password=admin@123
     
# List the created secrets
    $ kubectl -n test-middleware get secret
    NAME                                          TYPE                                  DATA   AGE
    elastic-certificate-pem                       Opaque                                1      11m
    elastic-certificates                          Opaque                                1      11m
    elastic-credentials                           Opaque                                2      11m
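
    As a quick verification, you can decode the stored password back out of the secret:

    $ kubectl -n test-middleware get secret elastic-credentials -o jsonpath='{.data.password}' | base64 -d
    admin@123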
    
    

3. Pull the elasticsearch chart locally

    $ cd /root/elk/es/
    
# Pull the chart into the local /root/elk/es directory
    $ helm pull elastic/elasticsearch --version 7.16.3
    
    $ tar -zxvf elasticsearch-7.16.3.tgz
    $ cp elasticsearch/values.yaml ./values-test.yaml
    
# Show the directory layout
    $ tree -L 2
    .
    ├── certs
    │   ├── elastic-certificate.pem
    │   └── elastic-certificates.p12
    ├── elasticsearch
    │   ├── Chart.yaml
    │   ├── examples
    │   ├── Makefile
    │   ├── README.md
    │   ├── templates
    │   └── values.yaml
    ├── elasticsearch-7.16.3.tgz
    └── values-test.yaml
    

4. Edit the local values-test.yaml

• Check the cluster's storageClasses
    $ kubectl get storageclasses.storage.k8s.io 
    NAME                   PROVISIONER           RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    openebs-jiva-default   jiva.csi.openebs.io   Delete          Immediate              true                   33d
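
    If you prefer to leave storageClassName unset and rely on the cluster default, you can mark a storageClass as default first (an optional sketch; substitute your own storageClass name):

    # Optional: mark openebs-jiva-default as the cluster's default storageClass
    $ kubectl patch storageclass openebs-jiva-default \
        -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'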
    
• Modify the configuration
    $ cat values-test.yaml 
    
## If storageClassName is left unset, the PVC falls back to the cluster's default storageClass
## (provided here by OpenEBS). To deploy from this guide you must provide a storageClass
## yourself (Ceph, NFS, or a cloud provider's equivalent).
    
    ---
    clusterName: "elasticsearch"
    nodeGroup: "master"
    
    
    roles:
      master: "true"
      ingest: "true"
      data: "true"
      remote_cluster_client: "true"
      ml: "true"
    
    
# Number of nodes in the cluster
    replicas: 3
    minimumMasterNodes: 2
    
    
# Enable X-Pack security for ES; the certificate comes from the elastic-certificates secret defined above
    esConfig:
      elasticsearch.yml: |
        xpack.security.enabled: true
        xpack.security.transport.ssl.enabled: true
        xpack.security.transport.ssl.verification_mode: certificate
        xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
        xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12 
    
    
# Credentials for ES cluster authentication, referencing the elastic-credentials secret defined above
    extraEnvs:
      - name: ELASTIC_USERNAME
        valueFrom:
          secretKeyRef:
            name: elastic-credentials
            key: username
      - name: ELASTIC_PASSWORD
        valueFrom:
          secretKeyRef:
            name: elastic-credentials
            key: password
    
    
# Mount the elastic-certificates secret into the pod; used by the X-Pack settings above
    secretMounts:
      - name: elastic-certificates
        secretName: elastic-certificates
        path: /usr/share/elasticsearch/config/certs
        defaultMode: 0755
    
    
    image: "elasticsearch"
    imageTag: "7.16.3"
    imagePullPolicy: "IfNotPresent"
    
    
    esJavaOpts: "" # example: "-Xmx1g -Xms1g"
    
    
    volumeClaimTemplate:
# The storageClass to use
      storageClassName: "openebs-jiva-default"
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 30Gi
    
    
# Enable persistent storage
    persistence:
      enabled: true
      labels:
        # Add default labels for the volumeClaimTemplate of the StatefulSet
        enabled: false
      annotations: {}
    
    
    service:
      enabled: true
      labels: {}
      labelsHeadless: {}
      type: ClusterIP
      nodePort: ""
      annotations: {}
      httpPortName: http
      transportPortName: transport
      loadBalancerIP: ""
      loadBalancerSourceRanges: []
      externalTrafficPolicy: ""
    
    
    nodeSelector: {}
    
    
    ingress:
      enabled: false
      annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
      className: "nginx"
      pathtype: ImplementationSpecific
      hosts:
        - host: chart-example.local
          paths:
            - path: /
      tls: []
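
    Before installing, you can render the chart locally to check that the overrides in values-test.yaml are picked up (optional):

    # Optional: render the manifests without installing anything
    $ helm -n test-middleware template elasticsearch-cluster elasticsearch -f values-test.yaml | less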
    

5. Install the elasticsearch cluster

# Install the elasticsearch cluster
    $ helm -n test-middleware install elasticsearch-cluster elasticsearch -f values-test.yaml
    
## helm -n NAMESPACE install RELEASE_NAME CHART -f VALUES_FILE
-n selects the Kubernetes namespace to install into
-f selects a values file whose settings override those in elasticsearch/values.yaml
    
    
    NAME: elasticsearch-cluster
    LAST DEPLOYED: Thu Jun  2 22:33:31 2022
    NAMESPACE: test-middleware
    STATUS: deployed
    REVISION: 1
    NOTES:
    1. Watch all cluster members come up.
      $ kubectl get pods --namespace=test-middleware -l app=elasticsearch-master -w
    2. Test cluster health using Helm test.
      $ helm --namespace=test-middleware test elasticsearch-cluster
    

6. Inspect the deployed elasticsearch cluster

    $ helm -n test-middleware list
    NAME                 	NAMESPACE      	REVISION	UPDATED                                	STATUS  	CHART               	APP VERSION
    elasticsearch-cluster	test-middleware	1       	2022-06-02 22:57:55.112229514 -0400 EDT	deployed	elasticsearch-7.16.3	7.16.3
    
    $ kubectl get pods --namespace=test-middleware -l app=elasticsearch-master
    NAME                     READY   STATUS    RESTARTS   AGE
    elasticsearch-master-0   1/1     Running   0          8m57s
    elasticsearch-master-1   1/1     Running   0          8m35s
    elasticsearch-master-2   1/1     Running   0          8m35s
    
• Check the storageClass used by the service
# List the PVCs
    $ kubectl -n test-middleware get pvc
    NAME                                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
    elasticsearch-master-elasticsearch-master-0   Bound    pvc-778c6765-59cf-4b70-9029-29d216fb4648   30Gi       RWO            openebs-jiva-default   11m
    elasticsearch-master-elasticsearch-master-1   Bound    pvc-40e32e2a-049a-4b79-bfb8-be75dd00f517   30Gi       RWO            openebs-jiva-default   11m
    elasticsearch-master-elasticsearch-master-2   Bound    pvc-a71bd761-66bb-482a-abfd-93fd6164088a   30Gi       RWO            openebs-jiva-default   11m
    
# List the PVs
    $ kubectl get pv
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                 STORAGECLASS           REASON   AGE
    pvc-40e32e2a-049a-4b79-bfb8-be75dd00f517   30Gi       RWO            Delete           Bound    test-middleware/elasticsearch-master-elasticsearch-master-1           openebs-jiva-default            11m
    pvc-778c6765-59cf-4b70-9029-29d216fb4648   30Gi       RWO            Delete           Bound    test-middleware/elasticsearch-master-elasticsearch-master-0           openebs-jiva-default            11m
    pvc-a71bd761-66bb-482a-abfd-93fd6164088a   30Gi       RWO            Delete           Bound    test-middleware/elasticsearch-master-elasticsearch-master-2           openebs-jiva-default            11m
    

7. Connect to the elasticsearch cluster and verify the service

    $ kubectl -n test-middleware exec -it elasticsearch-master-0 -- bash
    
    elasticsearch@elasticsearch-master-0:~$ curl -u elastic:admin@123 localhost:9200/_cluster/health?pretty
    {
      "cluster_name" : "elasticsearch",
      "status" : "green",
      "timed_out" : false,
      "number_of_nodes" : 3,
      "number_of_data_nodes" : 3,
      "active_primary_shards" : 1,
      "active_shards" : 2,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0,
      "delayed_unassigned_shards" : 0,
      "number_of_pending_tasks" : 0,
      "number_of_in_flight_fetch" : 0,
      "task_max_waiting_in_queue_millis" : 0,
      "active_shards_percent_as_number" : 100.0
    }
    
    elasticsearch@elasticsearch-master-0:~$ curl -u elastic:admin@123 localhost:9200/_cat/nodes            
    10.244.3.131 17 65  7 0.13 0.28 0.48 cdfhilmrstw - elasticsearch-master-0
    10.244.2.50  53 69 10 0.04 0.18 0.35 cdfhilmrstw - elasticsearch-master-1
    10.244.1.71  59 71  9 0.00 0.20 0.49 cdfhilmrstw * elasticsearch-master-2
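
    You can also verify access from outside the pod by port-forwarding the elasticsearch-master service (an optional check, run from a machine with kubectl access):

    # Optional: forward the service to localhost and query it
    $ kubectl -n test-middleware port-forward svc/elasticsearch-master 9200:9200 &
    $ curl -u elastic:admin@123 'http://localhost:9200/_cat/health?v'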
    

Installing filebeat with Helm

1. Pull the filebeat chart locally

    $ mkdir -p /root/elk/filebeat/ && cd /root/elk/filebeat/
    
# Pull the chart into the local /root/elk/filebeat directory
    $ helm pull elastic/filebeat --version 7.16.3
    
    $ tar -zxvf filebeat-7.16.3.tgz
    $ cp filebeat/values.yaml values-test.yaml
    
# Show the directory layout
    $ tree -L 2
    .
    ├── filebeat
    │   ├── Chart.yaml
    │   ├── examples
    │   ├── Makefile
    │   ├── README.md
    │   ├── templates
    │   └── values.yaml
    ├── filebeat-7.16.3.tgz
    └── values-test.yaml
    

2. Edit the local values-test.yaml

• Modify the configuration
    $ cat values-test.yaml 
    
    ---
    daemonset:
      enabled: false
    
    
    deployment:
  # Run filebeat as a Deployment rather than the default DaemonSet
      enabled: true
      filebeatConfig:
        filebeat.yml: |
          filebeat.inputs:
          - type: container
            paths:
              - /var/log/containers/*.log
            processors:
            - add_kubernetes_metadata:
                host: ${NODE_NAME}
                matchers:
                - logs_path:
                    logs_path: "/var/log/containers/"
    
          output.elasticsearch:
            host: '${NODE_NAME}'
            hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
    
      nodeSelector: {}
    
      resources:
        requests:
          cpu: "100m"
          memory: "100Mi"
        limits:
          cpu: "1000m"
          memory: "200Mi"
    
    
    image: "docker.elastic.co/beats/filebeat"
    imageTag: "7.16.3"
    imagePullPolicy: "IfNotPresent"
    imagePullSecrets: []
    
    replicas: 1
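
    Note: because X-Pack security is enabled on the cluster, filebeat's elasticsearch output also needs credentials, otherwise its bulk requests will be rejected with 401. A minimal sketch, assuming the chart's deployment.extraEnvs field and reusing the elastic-credentials secret (merge into values-test.yaml; the env var names are arbitrary):

    deployment:
      extraEnvs:
        # Inject the ES credentials from the secret created earlier
        - name: ELASTICSEARCH_USERNAME
          valueFrom:
            secretKeyRef:
              name: elastic-credentials
              key: username
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elastic-credentials
              key: password

    and reference them in filebeat.yml:

          output.elasticsearch:
            hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
            # Hypothetical additions: pass the injected credentials to the output
            username: '${ELASTICSEARCH_USERNAME}'
            password: '${ELASTICSEARCH_PASSWORD}'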
    

3. Install filebeat

# Install filebeat
    $ helm -n test-middleware install filebeat filebeat -f values-test.yaml
    
## helm -n NAMESPACE install RELEASE_NAME CHART -f VALUES_FILE
-n selects the Kubernetes namespace to install into
-f selects a values file whose settings override those in filebeat/values.yaml
    
    
    NAME: filebeat
    LAST DEPLOYED: Fri Jun  3 01:50:52 2022
    NAMESPACE: test-middleware
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    1. Watch all containers come up.
      $ kubectl get pods --namespace=test-middleware -l app=filebeat-filebeat -w
    

4. Inspect the deployed filebeat

    $ helm -n test-middleware list
    NAME                 	NAMESPACE      	REVISION	UPDATED                                	STATUS  	CHART               	APP VERSION
    filebeat             	test-middleware	1       	2022-06-03 07:20:19.380955035 -0400 EDT	deployed	filebeat-7.16.3     	7.16.3 
    
    $ kubectl get pods --namespace=test-middleware -l app=filebeat-filebeat
    NAME                                 READY   STATUS             RESTARTS   AGE
    filebeat-filebeat-5b67d8479b-zlprk   1/1     Running            0          4m58s
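
    To verify that logs are flowing, check for a filebeat index on the cluster (the exact index name depends on filebeat's ILM and template settings):

    $ kubectl -n test-middleware exec -it elasticsearch-master-0 -- \
        curl -s -u elastic:admin@123 'localhost:9200/_cat/indices?v' | grep filebeat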
    
    

Installing kibana with Helm

1. Pull the kibana chart locally

    $ mkdir -p /root/elk/kibana/ && cd /root/elk/kibana/
    
# Pull the chart into the local /root/elk/kibana directory
    $ helm pull elastic/kibana --version 7.16.3
    
    $ tar -zxvf kibana-7.16.3.tgz
    $ cp kibana/values.yaml values-test.yaml
    
# Show the directory layout
    $ tree -L 2
    .
    ├── kibana
    │   ├── Chart.yaml
    │   ├── examples
    │   ├── Makefile
    │   ├── README.md
    │   ├── templates
    │   └── values.yaml
    ├── kibana-7.16.3.tgz
    └── values-test.yaml
    

2. Edit the local values-test.yaml

• Modify the configuration
    $ cat values-test.yaml 
    
    ---
    elasticsearchHosts: "http://elasticsearch-master:9200"
    
    
    replicas: 1
    
    
    extraEnvs:
      - name: ELASTICSEARCH_USERNAME
        valueFrom:
          secretKeyRef:
            name: elastic-credentials
            key: username
      - name: ELASTICSEARCH_PASSWORD
        valueFrom:
          secretKeyRef:
            name: elastic-credentials
            key: password
      - name: "NODE_OPTIONS"
        value: "--max-old-space-size=1800"
    
    
    image: "kibana"
    imageTag: "7.16.3"
    imagePullPolicy: "IfNotPresent"
    nodeSelector: {}
    
    
    kibanaConfig:
      kibana.yml: |
        i18n.locale: "zh-CN"
    
    
    service:
      type: ClusterIP
      loadBalancerIP: ""
      port: 5601
      nodePort: ""
      labels: {}
      annotations:
        {}
        # cloud.google.com/load-balancer-type: "Internal"
        # service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
        # service.beta.kubernetes.io/azure-load-balancer-internal: "true"
        # service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
        # service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true"
      loadBalancerSourceRanges:
        []
        # 0.0.0.0/0
      httpPortName: http
    
    
    ingress:
      enabled: true
      className: "nginx"
      pathtype: ImplementationSpecific
      annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
      hosts:
        - host: kibana.evescn.com
          paths:
            - path: /
    

3. Install kibana

# Install kibana
    $ helm -n test-middleware install kibana kibana -f values-test.yaml
    
## helm -n NAMESPACE install RELEASE_NAME CHART -f VALUES_FILE
-n selects the Kubernetes namespace to install into
-f selects a values file whose settings override those in kibana/values.yaml
    
    
    NAME: kibana
    LAST DEPLOYED: Fri Jun  3 02:21:20 2022
    NAMESPACE: test-middleware
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    

4. Inspect the deployed kibana

    $ helm -n test-middleware list
    NAME                 	NAMESPACE      	REVISION	UPDATED                                	STATUS  	CHART               	APP VERSION
    kibana               	test-middleware	1       	2022-06-03 02:21:20.846637173 -0400 EDT	deployed	kibana-7.16.3       	7.16.3 
    
    $ kubectl get pod --namespace=test-middleware -l app=kibana
    NAME                               READY   STATUS              RESTARTS   AGE
    kibana-kibana-d4fcd8979-nldsw      1/1     Running             0          2m15s
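
    To reach the Kibana UI, point kibana.evescn.com at your ingress controller's address (e.g. an /etc/hosts entry on your workstation), or port-forward the service directly as a quick check, then log in with the elastic / admin@123 credentials created earlier:

    # Optional: bypass the ingress, then open http://localhost:5601
    $ kubectl -n test-middleware port-forward svc/kibana-kibana 5601:5601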
    

References

    https://bbs.huaweicloud.com/blogs/303085
    https://www.cnblogs.com/aresxin/p/helm-es6.html
    
    https://artifacthub.io/packages/helm/elastic/elasticsearch/7.16.3
    https://artifacthub.io/packages/helm/elastic/filebeat/7.16.3
    https://artifacthub.io/packages/helm/elastic/kibana/7.16.3
    