• [Kafka Learning, Part 5] Kafka Operations: Configuring Operation Logs and Deleting Topics


    1. Operation Logs

    First, here is the Kafka operation log configuration file, log4j.properties.

    Adjust the log settings to suit your needs.

    # Log level override rules. Priority: ALL < DEBUG < INFO < WARN < ERROR < FATAL < OFF
    # 1. A child logger (log4j.logger) overrides the root logger (log4j.rootLogger); it sets the log output level, while Threshold sets the level an appender accepts.
    # 2. If the log4j.logger level is lower than Threshold, what the appender receives is determined by Threshold.
    # 3. If the log4j.logger level is higher than Threshold, what the appender receives is determined by the log4j.logger level, since nothing below that level is emitted in the first place.
    # 4. Child loggers are mainly used to separate their output from the root logger, usually together with log4j.additivity.
    # log4j.additivity controls whether a logger inherits its parent's appenders; the default is true.
    # With true, output would also go to stdout and kafkaAppender in addition to stateChangeAppender.
    # Here we want separate output, so it is set to false: output goes only to stateChangeAppender.
    # If a log4j.logger entry names no appender, it falls back to the appenders set on log4j.rootLogger.
    # With org.apache.log4j.RollingFileAppender, MaxFileSize sets the maximum file size and MaxBackupIndex the maximum number of files.
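
    The main configuration below uses DailyRollingFileAppender (time-based rotation). For comparison, a minimal size-based setup with RollingFileAppender might look like the sketch below; the appender name sizeAppender and its file name are illustrative, not part of Kafka's shipped config:

    # Size-based rotation sketch (appender name and file name are examples)
    log4j.appender.sizeAppender=org.apache.log4j.RollingFileAppender
    log4j.appender.sizeAppender.File=${kafka.logs.dir}/server-size.log
    log4j.appender.sizeAppender.MaxFileSize=100MB
    log4j.appender.sizeAppender.MaxBackupIndex=10
    log4j.appender.sizeAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.sizeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n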

    # Root logger
    log4j.rootLogger=ERROR, stdout, kafkaAppender

    # Console appender and layout
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

    # kafkaAppender appender and layout
    log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
    log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
    log4j.appender.kafkaAppender.Append=true
    log4j.appender.kafkaAppender.Threshold=ERROR
    log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

    # State change log
    log4j.logger.state.change.logger=ERROR, stateChangeAppender
    log4j.additivity.state.change.logger=false
    log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
    log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
    log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

    # Request handling
    log4j.logger.kafka.request.logger=ERROR, requestAppender
    log4j.additivity.kafka.request.logger=false
    log4j.logger.kafka.network.Processor=ERROR, requestAppender
    log4j.additivity.kafka.network.Processor=false
    log4j.logger.kafka.server.KafkaApis=ERROR, requestAppender
    log4j.additivity.kafka.server.KafkaApis=false
    log4j.logger.kafka.network.RequestChannel$=ERROR, requestAppender
    log4j.additivity.kafka.network.RequestChannel$=false
    log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
    log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
    log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

    # kafka-logs cleaner
    log4j.logger.kafka.log.LogCleaner=ERROR, cleanerAppender
    log4j.additivity.kafka.log.LogCleaner=false
    log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
    log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
    log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

    # Controller
    log4j.logger.kafka.controller=ERROR, controllerAppender
    log4j.additivity.kafka.controller=false
    log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
    log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
    log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

    # Authorizer
    log4j.logger.kafka.authorizer.logger=ERROR, authorizerAppender
    log4j.additivity.kafka.authorizer.logger=false
    log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
    log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
    log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

    # ZkClient
    log4j.logger.org.I0Itec.zkclient.ZkClient=ERROR
    # ZooKeeper
    log4j.logger.org.apache.zookeeper=ERROR
    # Kafka
    log4j.logger.kafka=ERROR
    # org.apache.kafka
    log4j.logger.org.apache.kafka=ERROR

    Second, Kafka prints GC logs by default, as shown below:

    [cluster@PCS102 logs]$ ls
    kafka-authorizer.log          kafkaServer-gc.log.3  kafkaServer-gc.log.8      server.log.2018-10-22-14
    kafka-request.log             kafkaServer-gc.log.4  kafkaServer-gc.log.9      server.log.2018-10-22-15
    kafkaServer-gc.log.0          kafkaServer-gc.log.5  kafkaServer.out
    kafkaServer-gc.log.1          kafkaServer-gc.log.6  server.log
    kafkaServer-gc.log.2.current  kafkaServer-gc.log.7  server.log.2018-10-22-13

    These GC logs are not needed in production and should be disabled. Under the Kafka bin directory there is a kafka-run-class.sh script; open it with vim:

    Set KAFKA_GC_LOG_OPTS=" " to a blank value inside the GC_LOG_ENABLED branch; after restarting Kafka, GC logs are no longer written.

    [cluster@PCS102 bin]$ vim kafka-run-class.sh

    GC_FILE_SUFFIX='-gc.log'
    GC_LOG_FILE_NAME=''
    if [ "x$GC_LOG_ENABLED" = "xtrue" ]; then
      GC_LOG_FILE_NAME=$DAEMON_NAME$GC_FILE_SUFFIX
      KAFKA_GC_LOG_OPTS="-Xloggc:$LOG_DIR/$GC_LOG_FILE_NAME -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M"
      # Added line: override the options above so no GC log is written
      KAFKA_GC_LOG_OPTS=" "
    fi
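
    To verify the change, restart the broker and check that no new GC log files appear. A minimal check, assuming a tarball install run from the bin directory (paths are examples based on the listing above):

    ./kafka-server-stop.sh
    rm -f ../logs/kafkaServer-gc.log*            # clear out the old GC logs first
    ./kafka-server-start.sh -daemon ../config/server.properties
    sleep 10
    ls ../logs | grep gc                         # expect no new kafkaServer-gc.log files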

    You can also write a scheduled cleanup script and run it from crontab; for example, 0 2 * * * /home/cluster/kafka211/bin/cleanupkafkalog.sh runs it every day at 02:00. (Note that crontab takes five time fields; the Quartz-style 0 0 2 * * ? syntax will not work here.)

    #!/bin/bash

    # Kafka log directory
    logDir=/home/cluster/kafka211/logs
    # Keep the 60 most recent rotated server.log files
    count=60
    count=$((count + 1))
    LOGNUM=$(ls $logDir/server.log.* 2>/dev/null | wc -l)
    if [ "$LOGNUM" -gt 0 ]; then
        # List newest first and delete everything past the first $count files
        ls -t $logDir/server.log.* | tail -n +$count | xargs rm -f
    fi

    # Remove kafkaServer.out if present
    if [ -e "$logDir/kafkaServer.out" ]; then
        rm -f "$logDir/kafkaServer.out"
    fi
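
    To schedule it, make the script executable and add a crontab entry (the script path follows the example above):

    chmod +x /home/cluster/kafka211/bin/cleanupkafkalog.sh
    crontab -e
    # then add this line to run the cleanup daily at 02:00
    0 2 * * * /home/cluster/kafka211/bin/cleanupkafkalog.sh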

    2. Deleting a Topic and Its Message Data

    Example: deleting the topic t1205.
    (1) Delete the topic on the Kafka cluster; the topic is marked for deletion. (With delete.topic.enable=false, the default in older Kafka versions, it is only marked, not actually removed.)
    ./kafka-topics.sh --zookeeper node3:2181,node4:2181,node5:2181 --delete --topic t1205

    (2) On every broker node, delete the topic's actual data on disk.
    Remove the Kafka data directories for the topic; check the data directory location in the broker's config: server.properties -> log.dirs=/var/kafka/log/tmp
    rm -r /var/kafka/log/tmp/t1205*
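
    Before deleting, it is worth confirming the actual data directory on each broker; a quick check (the config path here is an example, adjust to your install):

    grep '^log.dirs' /home/cluster/kafka211/config/server.properties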

    (3) Enter the ZooKeeper client and delete the topic metadata:
    rmr /brokers/topics/t1205

    (4) Delete the topic's deletion marker in ZooKeeper:
    rmr /admin/delete_topics/t1205
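
    For reference, steps (3) and (4) in a zkCli.sh session might look like this, assuming zkCli.sh from a standard ZooKeeper install (on ZooKeeper 3.5+ the rmr command is deprecated in favor of deleteall):

    ./zkCli.sh -server node3:2181
    [zk: node3:2181(CONNECTED) 0] rmr /brokers/topics/t1205
    [zk: node3:2181(CONNECTED) 1] rmr /admin/delete_topics/t1205
    [zk: node3:2181(CONNECTED) 2] quit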


    Finally, restart the ZooKeeper and Kafka clusters and check whether the topic is still listed:
    ./kafka-topics.sh --list --zookeeper node3:2181,node4:2181,node5:2181
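
    Putting the steps together, a sketch of the whole procedure could look like the script below. The broker hostnames, SSH access, and paths are assumptions carried over from the examples above; treat it as a starting point rather than a drop-in tool:

    #!/bin/bash
    # Sketch: delete topic t1205 and its data across the cluster (hosts and paths are examples)
    TOPIC=t1205
    ZK=node3:2181,node4:2181,node5:2181
    BROKERS="node3 node4 node5"
    LOG_DIRS=/var/kafka/log/tmp

    # (1) Mark the topic for deletion
    ./kafka-topics.sh --zookeeper $ZK --delete --topic $TOPIC

    # (2) Remove the on-disk partition directories on every broker (requires SSH access)
    for b in $BROKERS; do
        ssh "$b" "rm -rf $LOG_DIRS/${TOPIC}*"
    done

    # (3)(4) Remove the topic metadata and the deletion marker from ZooKeeper
    # (zkCli.sh can run a single command passed on its command line)
    ./zkCli.sh -server node3:2181 rmr /brokers/topics/$TOPIC
    ./zkCli.sh -server node3:2181 rmr /admin/delete_topics/$TOPIC

    # Verify the topic is gone
    ./kafka-topics.sh --list --zookeeper $ZK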

  • Original article: https://www.cnblogs.com/cac2020/p/9831655.html