• Kafka Eagle Monitoring


    The Kafka-Eagle framework monitors the overall health of a Kafka cluster and is widely used in production environments.

    1. MySQL Environment Preparation

    Kafka-Eagle depends on MySQL, which it mainly uses to store the data shown in its dashboards.

    Link: https://pan.baidu.com/s/1fRHTwUgJciAT8g8IZhdrFQ
    Extraction code: rn0z

    The link above contains the MySQL installation files; download and install them if you need to set up MySQL.

    2. Kafka Environment Preparation

    1. Stop the Kafka cluster

    [hui@hadoop103 ~]$ kk.sh stop
    ---- stop hadoop103 kafka ---
    ---- stop hadoop104 kafka ---
    ---- stop hadoop105 kafka ---
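
    kk.sh here is a custom cluster-control script, not something that ships with Kafka. A minimal sketch of such a script, assuming passwordless SSH between the nodes and Kafka installed under /opt/module/kafka on every host, could look like this:

    #!/bin/bash
    # Hypothetical kk.sh: start/stop Kafka on every broker in the cluster.
    case $1 in
    "start")
        for host in hadoop103 hadoop104 hadoop105; do
            echo "---- start $host kafka ---"
            ssh $host "/opt/module/kafka/bin/kafka-server-start.sh -daemon /opt/module/kafka/config/server.properties"
        done
    ;;
    "stop")
        for host in hadoop103 hadoop104 hadoop105; do
            echo "---- stop $host kafka ---"
            ssh $host "/opt/module/kafka/bin/kafka-server-stop.sh"
        done
    ;;
    esac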

    2. Modify /opt/module/kafka/bin/kafka-server-start.sh

    Change the following block

    if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
        export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
    fi

    to:

    if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
        export KAFKA_HEAP_OPTS="-server -Xms2G -Xmx2G -XX:PermSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=8 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70"
        export JMX_PORT="9999"
        #export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
    fi
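
    A note on the JVM flags: -XX:PermSize only applies to JDK 7 and earlier; on JDK 8+ the permanent generation no longer exists and the JVM simply logs a warning that the option is ignored. If the brokers run on JDK 8 or later, a hedged equivalent of the same line would use Metaspace instead:

        # Hedged alternative for JDK 8+, otherwise identical to the line above
        export KAFKA_HEAP_OPTS="-server -Xms2G -Xmx2G -XX:MetaspaceSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=8 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70"

    The export JMX_PORT="9999" line is the part that matters for monitoring: it exposes a JMX endpoint on every broker, which EFAK polls through the cluster1.efak.jmx.uri template shown in the configuration below.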

    Note: after making this change, distribute the modified file to the other nodes before starting Kafka.

    [hui@hadoop103 bin]$ sxync.sh kafka-server-start.sh
    fname=kafka-server-start.sh
    pdir=/opt/module/kafka/bin
    ------------------- hadoop104 --------------
    sending incremental file list
    kafka-server-start.sh
    
    sent 975 bytes  received 43 bytes  2036.00 bytes/sec
    total size is 1584  speedup is 1.56
    ------------------- hadoop105 --------------
    sending incremental file list
    kafka-server-start.sh
    
    sent 975 bytes  received 43 bytes  2036.00 bytes/sec
    total size is 1584  speedup is 1.56
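
    sxync.sh is likewise a custom rsync-based distribution script. A minimal sketch matching the output above, assuming passwordless SSH and identical directory layouts on hadoop104 and hadoop105:

    #!/bin/bash
    # Hypothetical sxync.sh: sync a file to the same path on the other nodes.
    fname=$(basename $1)
    pdir=$(cd -P $(dirname $1); pwd)
    echo fname=$fname
    echo pdir=$pdir
    for host in hadoop104 hadoop105; do
        echo "------------------- $host --------------"
        rsync -av $pdir/$fname $host:$pdir/
    done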

    3. Kafka-Eagle Installation

    Official website: https://www.kafka-eagle.org/

    Extract and install

    [hui@hadoop103 software]$ tar -zxvf kafka-eagle-bin-2.0.8.tar.gz 
    kafka-eagle-bin-2.0.8/
    kafka-eagle-bin-2.0.8/efak-web-2.0.8-bin.tar.gz
    [hui@hadoop103 software]$ ll
    total 811296
    -rw-r--r-- 1 hui wd  51776645 Dec 17  2020 canal.deployer-1.1.4.tar.gz
    -rw-r--r-- 1 hui wd 333549393 Mar  7  2021 flink-1.12.0-bin-scala_2.11.tgz
    -rw-r--r-- 1 hui wd  86486610 Feb  9 15:23 kafka_2.12-3.0.0.tgz
    drwxr-xr-x 2 hui wd      4096 Oct 13 00:00 kafka-eagle-bin-2.0.8
    -rw-r--r-- 1 hui wd  81074069 Feb  9 15:23 kafka-eagle-bin-2.0.8.tar.gz
    -rw-r--r-- 1 hui wd  57452767 Dec 17  2020 maxwell-1.25.0.tar.gz
    drwxr-xr-x 6 hui wd      4096 Mar  4  2016 scala-2.11.8
    -rw-r--r-- 1 hui wd 220400553 Mar 13 05:55 spark-3.0.3-bin-hadoop2.7.tgz
    [hui@hadoop103 software]$ cd kafka-eagle-bin-2.0.8
    [hui@hadoop103 kafka-eagle-bin-2.0.8]$ ll
    total 79164
    -rw-r--r-- 1 hui wd 81062577 Oct 13 00:00 efak-web-2.0.8-bin.tar.gz
    [hui@hadoop103 kafka-eagle-bin-2.0.8]$ tar -zxvf efak-web-2.0.8-bin.tar.gz -C /opt/module/

    Rename the directory

    [hui@hadoop103 module]$ mv efak-web-2.0.8 efak
    [hui@hadoop103 module]$ cd efak/

    Modify the configuration file /opt/module/efak/conf/system-config.properties

    [hui@hadoop103 efak]$ vim /opt/module/efak/conf/system-config.properties
    ######################################
    # multi zookeeper & kafka cluster list
    # Settings prefixed with 'kafka.eagle.' will be deprecated, use 'efak.' instead
    ######################################
    efak.zk.cluster.alias=cluster1
    cluster1.zk.list=hadoop103:2181,hadoop104:2181,hadoop105:2181
    #cluster2.zk.list=xdn10:2181,xdn11:2181,xdn12:2181
    
    ######################################
    # zookeeper enable acl
    ######################################
    cluster1.zk.acl.enable=false
    cluster1.zk.acl.schema=digest
    cluster1.zk.acl.username=test
    cluster1.zk.acl.password=test123
    
    ######################################
    # broker size online list
    ######################################
    cluster1.efak.broker.size=20
    
    ######################################
    # zk client thread limit
    ######################################
    kafka.zk.limit.size=32
    
    ######################################
    # EFAK webui port
    ######################################
    efak.webui.port=8048
    
    ######################################
    # kafka jmx acl and ssl authenticate
    ######################################
    cluster1.efak.jmx.acl=false
    cluster1.efak.jmx.user=keadmin
    cluster1.efak.jmx.password=keadmin123
    cluster1.efak.jmx.ssl=false
    cluster1.efak.jmx.truststore.location=/data/ssl/certificates/kafka.truststore
    cluster1.efak.jmx.truststore.password=ke123456
    
    ######################################
    # kafka offset storage
    ######################################
    cluster1.efak.offset.storage=kafka
    #cluster2.efak.offset.storage=zk
    
    ######################################
    # kafka jmx uri
    ######################################
    cluster1.efak.jmx.uri=service:jmx:rmi:///jndi/rmi://%s/jmxrmi
    
    ######################################
    # kafka metrics, 15 days by default
    ######################################
    efak.metrics.charts=true
    efak.metrics.retain=15
    
    ######################################
    # kafka sql topic records max
    ######################################
    efak.sql.topic.records.max=5000
    efak.sql.topic.preview.records.max=10
    
    ######################################
    # delete kafka topic token
    ######################################
    efak.topic.token=keadmin
    
    ######################################
    # kafka sasl authenticate
    ######################################
    cluster1.efak.sasl.enable=false
    cluster1.efak.sasl.protocol=SASL_PLAINTEXT
    cluster1.efak.sasl.mechanism=SCRAM-SHA-256
    cluster1.efak.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="kafka" password="kafka-eagle";
    cluster1.efak.sasl.client.id=
    cluster1.efak.blacklist.topics=
    cluster1.efak.sasl.cgroup.enable=false
    cluster1.efak.sasl.cgroup.topics=
    cluster2.efak.sasl.enable=false
    cluster2.efak.sasl.protocol=SASL_PLAINTEXT
    cluster2.efak.sasl.mechanism=PLAIN
    cluster2.efak.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="kafka-eagle";
    cluster2.efak.sasl.client.id=
    cluster2.efak.blacklist.topics=
    cluster2.efak.sasl.cgroup.enable=false
    cluster2.efak.sasl.cgroup.topics=
    
    ######################################
    # kafka ssl authenticate
    ######################################
    cluster3.efak.ssl.enable=false
    cluster3.efak.ssl.protocol=SSL
    cluster3.efak.ssl.truststore.location=
    cluster3.efak.ssl.truststore.password=
    cluster3.efak.ssl.keystore.location=
    cluster3.efak.ssl.keystore.password=
    cluster3.efak.ssl.key.password=
    cluster3.efak.ssl.endpoint.identification.algorithm=https
    cluster3.efak.blacklist.topics=
    cluster3.efak.ssl.cgroup.enable=false
    cluster3.efak.ssl.cgroup.topics=
    
    ######################################
    # kafka sqlite jdbc driver address (not used here; the MySQL block below takes effect)
    ######################################
    #efak.driver=org.sqlite.JDBC
    #efak.url=jdbc:sqlite:/hadoop/kafka-eagle/db/ke.db
    #efak.username=root
    #efak.password=www.kafka-eagle.org
    
    ######################################
    # kafka mysql jdbc driver address
    ######################################
    efak.driver=com.mysql.jdbc.Driver
    efak.url=jdbc:mysql://hadoop103:3306/test?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
    efak.username=root
    efak.password=123465
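
    The JDBC URL above points at a database named test on hadoop103. EFAK initializes its own tables on first start, so it is enough to make sure that database exists and that the account in the URL can connect. A hedged sanity check from hadoop103, using exactly the credentials from the URL above:

    [hui@hadoop103 efak]$ mysql -uroot -p123465 -e 'CREATE DATABASE IF NOT EXISTS test DEFAULT CHARACTER SET utf8mb4;'
    [hui@hadoop103 efak]$ mysql -uroot -p123465 -e 'SHOW DATABASES;'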

    Add environment variables

    [hui@hadoop103 efak]$ sudo vim /etc/profile
    # kafkaEFAK
    export KE_HOME=/opt/module/efak
    export PATH=$PATH:$KE_HOME/bin
    [hui@hadoop103 efak]$ source /etc/profile
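
    A quick check that the new variable is visible in the current shell:

    [hui@hadoop103 efak]$ echo $KE_HOME
    /opt/module/efak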

    Start ZooKeeper and Kafka, then check the processes on every node:

    [hui@hadoop103 efak]$ jps.sh
    ------------------- hui@hadoop103 --------------
    4448 Kafka
    4545 Jps
    2487 QuorumPeerMain
    ------------------- hui@hadoop104 --------------
    4231 Jps
    4141 Kafka
    2303 QuorumPeerMain
    ------------------- hui@hadoop105 --------------
    4193 Jps
    2278 QuorumPeerMain
    4104 Kafka
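
    Before starting EFAK, it is also worth confirming that the JMX port configured earlier in kafka-server-start.sh is actually listening on every broker; for example (a hypothetical check, any ss/netstat equivalent will do):

    [hui@hadoop103 efak]$ for host in hadoop103 hadoop104 hadoop105; do ssh $host "hostname; ss -lnt | grep 9999"; done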

    Start the monitoring program

    [hui@hadoop103 efak]$ bin/ke.sh start
    Version 2.0.8 -- Copyright 2016-2021
    *******************************************************************
    * EFAK Service has started success.
    * Welcome, Now you can visit 'http://192.168.124.130:8048'
    * Account:admin ,Password:123456
    *******************************************************************
    * <Usage> ke.sh [start|status|stop|restart|stats] </Usage>
    * <Usage> https://www.kafka-eagle.org/ </Usage>
    *******************************************************************

    http://192.168.124.130:8048/

    Open the address shown in the startup output to access the monitoring page.
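
    If the page does not come up, the usage line printed above also provides a status subcommand for checking whether the service is actually running:

    [hui@hadoop103 efak]$ bin/ke.sh status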

    Stop the monitoring program

    bin/ke.sh stop

    4. Kafka-Eagle Page Operations

• Original article: https://www.cnblogs.com/wdh01/p/16102792.html