• Logback: asynchronously pushing logs to Kafka and standardizing the log output format


    We don't need to worry about the Logback version, only the Spring Boot version: the parent project brings in Logback automatically. Spring Boot can already print logs out of the box, so why standardize logging?

    • Unified logs that are easy to browse and manage.
    • Log archiving.
    • Log persistence.
    • Distributed log viewing (ELK) for convenient search and inspection.

    We'll skip a general introduction to Logback and go straight to the code. This article covers the following features:

    • Redefine the log output format.
    • Customize the log level for specific packages.
    • Output logs per module.
    • Asynchronously push logs to Kafka.

    POM file

    To push logs to Kafka, add the following two dependencies. (Persisting logs to disk needs no extra dependencies, and both of these are harmless to include even if you don't push to Kafka.)

    <properties>
        <logback-kafka-appender.version>0.2.0-RC1</logback-kafka-appender.version>
        <janino.version>2.7.8</janino.version>
    </properties>
    <!-- Ship logs to Kafka -->
    <dependency>
        <groupId>com.github.danielwegener</groupId>
        <artifactId>logback-kafka-appender</artifactId>
        <version>${logback-kafka-appender.version}</version>
        <scope>runtime</scope>
    </dependency>
    
    <!-- Required for <if condition> expressions in the Logback XML -->
    <dependency>
        <groupId>org.codehaus.janino</groupId>
        <artifactId>janino</artifactId>
        <version>${janino.version}</version>
    </dependency>
    

    Configuration files

    There are three configuration files under the resources folder: logback-spring.xml sits at the classpath root so Spring Boot picks it up, while the other two live in a logging/ subfolder to match the include paths below; a layout sketch follows the list.

    1. logback-defaults.xml
    2. logback-pattern.xml
    3. logback-spring.xml
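
    Given the include paths used in the config, the files are laid out like this (a sketch inferred from the config rather than stated explicitly in the original):

    src/main/resources
    ├── logback-spring.xml
    └── logging
        ├── logback-defaults.xml
        └── logback-pattern.xml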

    logback-spring.xml

    <?xml version="1.0" encoding="UTF-8"?>
    
    <configuration>
        <include resource="logging/logback-pattern.xml"/>
        <include resource="logging/logback-defaults.xml"/>
    </configuration>
    
    

    logback-defaults.xml

    <?xml version="1.0" encoding="UTF-8"?>
    
    <included>
    
        <!-- Spring Boot's default log file (spring.log) -->
        <property name="LOG_FILE" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/spring.log}"/>
    
        <!-- Output directory for log files -->
        <property name="LOG_HOME" value="${LOG_PATH:-/tmp}"/>
    
        <!--
            Append logs to the console (reusing the console appender Spring Boot already
            ships for Logback) and to files; the <logger> entries in the included pattern
            file set the log level for a specific package or class.
         -->
        <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
        <include resource="logback-pattern.xml"/>
    
        <!-- Maximum size of a single log file; beyond this it is compressed and archived -->
        <property name="INFO_MAX_FILE_SIZE" value="100MB"/>
        <property name="ERROR_MAX_FILE_SIZE" value="100MB"/>
        <property name="TRACE_MAX_FILE_SIZE" value="100MB"/>
        <property name="WARN_MAX_FILE_SIZE" value="100MB"/>
    
        <!-- Maximum number of days to keep archives (the rollover period is daily) -->
        <property name="INFO_MAX_HISTORY" value="9"/>
        <property name="ERROR_MAX_HISTORY" value="9"/>
        <property name="TRACE_MAX_HISTORY" value="9"/>
        <property name="WARN_MAX_HISTORY" value="9"/>
    
        <!-- Total size cap for archived logs; the oldest archives are deleted once exceeded -->
        <property name="INFO_TOTAL_SIZE_CAP" value="5GB"/>
        <property name="ERROR_TOTAL_SIZE_CAP" value="5GB"/>
        <property name="TRACE_TOTAL_SIZE_CAP" value="5GB"/>
        <property name="WARN_TOTAL_SIZE_CAP" value="5GB"/>
    
        <!-- Roll log files daily (and by size) -->
        <appender name="INFO_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
            <!-- Active log file -->
            <file>${LOG_HOME}/info.log</file>
            <!-- Compressed archive settings -->
            <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
                <fileNamePattern>${LOG_HOME}/backup/info/info.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
                <maxHistory>${INFO_MAX_HISTORY}</maxHistory>
                <maxFileSize>${INFO_MAX_FILE_SIZE}</maxFileSize>
                <totalSizeCap>${INFO_TOTAL_SIZE_CAP}</totalSizeCap>
            </rollingPolicy>
            <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
                <pattern>${FILE_LOG_PATTERN}</pattern>
            </encoder>
            <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
                <level>INFO</level>
            </filter>
        </appender>
    
        <appender name="WARN_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
            <!-- Active log file -->
            <file>${LOG_HOME}/warn.log</file>
            <!-- Compressed archive settings -->
            <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
                <fileNamePattern>${LOG_HOME}/backup/warn/warn.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
                <maxHistory>${WARN_MAX_HISTORY}</maxHistory>
                <maxFileSize>${WARN_MAX_FILE_SIZE}</maxFileSize>
                <totalSizeCap>${WARN_TOTAL_SIZE_CAP}</totalSizeCap>
            </rollingPolicy>
            <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
                <pattern>${FILE_LOG_PATTERN}</pattern>
            </encoder>
            <filter class="ch.qos.logback.classic.filter.LevelFilter">
                <level>WARN</level>
                <onMatch>ACCEPT</onMatch>
                <onMismatch>DENY</onMismatch>
            </filter>
        </appender>
    
        <appender name="ERROR_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
            <!-- Active log file -->
            <file>${LOG_HOME}/error.log</file>
            <!-- Compressed archive settings -->
            <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
                <fileNamePattern>${LOG_HOME}/backup/error/error.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
                <maxHistory>${ERROR_MAX_HISTORY}</maxHistory>
                <maxFileSize>${ERROR_MAX_FILE_SIZE}</maxFileSize>
                <totalSizeCap>${ERROR_TOTAL_SIZE_CAP}</totalSizeCap>
            </rollingPolicy>
            <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
                <pattern>${FILE_LOG_PATTERN}</pattern>
            </encoder>
            <filter class="ch.qos.logback.classic.filter.LevelFilter">
                <level>ERROR</level>
                <onMatch>ACCEPT</onMatch>
                <onMismatch>DENY</onMismatch>
            </filter>
        </appender>
    
        <!-- Kafka appender -->
        <appender name="KAFKA" class="com.github.danielwegener.logback.kafka.KafkaAppender">
            <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
                <pattern>${FILE_LOG_PATTERN}</pattern>
            </encoder>
            <topic>${kafka_env}applog_${spring_application_name}</topic>
            <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />
            <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
            <producerConfig>bootstrap.servers=${kafka_broker}</producerConfig>
            <!-- don't wait for a broker to ack the reception of a batch.  -->
            <producerConfig>acks=0</producerConfig>
            <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
            <producerConfig>linger.ms=1000</producerConfig>
            <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
            <producerConfig>max.block.ms=0</producerConfig>
            <!-- Optional parameter to use a fixed partition -->
            <partition>8</partition>
        </appender>
    
        <appender name="KAFKA_ASYNC" class="ch.qos.logback.classic.AsyncAppender">
            <appender-ref ref="KAFKA" />
        </appender>
    
        <root level="INFO">
            <appender-ref ref="CONSOLE"/>
            <appender-ref ref="INFO_FILE"/>
            <appender-ref ref="WARN_FILE"/>
            <appender-ref ref="ERROR_FILE"/>
            <if condition='"true".equals(property("kafka_enabled"))'>
                <then>
                    <appender-ref ref="KAFKA_ASYNC"/>
                </then>
            </if>
        </root>
    
    </included>
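
    One caveat about the KAFKA_ASYNC wrapper: by default, ch.qos.logback.classic.AsyncAppender holds a 256-event queue and starts discarding TRACE, DEBUG and INFO events once the queue is 80% full. If that matters for your Kafka pipeline, the wrapper can be tuned. A sketch with illustrative values (not part of the original config):

    <appender name="KAFKA_ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <!-- default queue size is 256 -->
        <queueSize>512</queueSize>
        <!-- 0 = never discard events by level (default is queueSize/5) -->
        <discardingThreshold>0</discardingThreshold>
        <!-- when the queue is full, drop events rather than block the application thread -->
        <neverBlock>true</neverBlock>
        <appender-ref ref="KAFKA"/>
    </appender>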
    

    Notes:

    • <partition>8</partition> above fixes the partition that messages are sent to. If your topic's partitions are numbered 0~7 (i.e. it has 8 partitions), sending to partition 8 will fail; either remove the property or point it at a valid partition index.
    • HostNameKeyingStrategy controls how the message key is generated. Kafka uses the key to decide which partition a message lands on; keying by host name means all logs from one server go to the same partition, preserving their time order. The default NoKeyKeyingStrategy spreads messages across partitions, so the time order of the logs is lost; it is not recommended for logging. A sketch of a custom keying strategy follows.
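
    If neither built-in strategy fits, you can supply your own. A minimal sketch, assuming the 0.2.x KeyingStrategy interface with its single byte[] createKey(...) method; the class itself is hypothetical, not part of the article's code. It keys by logger name, so each module's logs stay ordered within one partition:

    import ch.qos.logback.classic.spi.ILoggingEvent;
    import com.github.danielwegener.logback.kafka.keying.KeyingStrategy;

    import java.nio.charset.StandardCharsets;

    public class LoggerNameKeyingStrategy implements KeyingStrategy<ILoggingEvent> {

        @Override
        public byte[] createKey(ILoggingEvent event) {
            // Same logger name -> same key -> same partition.
            return event.getLoggerName().getBytes(StandardCharsets.UTF_8);
        }
    }

    Reference it from the appender with <keyingStrategy class="com.ryan.utils.LoggerNameKeyingStrategy"/> (package chosen to match the article's converters).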

    logback-pattern.xml

    <?xml version="1.0" encoding="UTF-8"?>
    
    <included>
    
        <!-- Rendering rules, e.g. colored output and exception formatting -->
        <conversionRule conversionWord="clr" converterClass="org.springframework.boot.logging.logback.ColorConverter" />
        <conversionRule conversionWord="wex" converterClass="org.springframework.boot.logging.logback.WhitespaceThrowableProxyConverter" />
        <conversionRule conversionWord="wEx" converterClass="org.springframework.boot.logging.logback.ExtendedWhitespaceThrowableProxyConverter" />
    
        <!-- Custom conversion rules (converters implemented below) -->
        <conversionRule conversionWord="ip" converterClass="com.ryan.utils.IPAddressConverter" />
        <conversionRule conversionWord="module" converterClass="com.ryan.utils.ModuleConverter" />
    
        <!-- Context properties -->
        <springProperty scope="context" name="spring_application_name" source="spring.application.name" />
        <springProperty scope="context" name="server_port" source="server.port" />
        <!-- Kafka properties -->
        <springProperty scope="context" name="kafka_enabled" source="ryan.web.logging.kafka.enabled"/>
        <springProperty scope="context" name="kafka_broker" source="ryan.web.logging.kafka.broker"/>
        <springProperty scope="context" name="kafka_env" source="ryan.web.logging.kafka.env"/>
    
        <!-- The log output format: -->
        <!-- appID | module |  dateTime | level | requestID | traceID | requestIP | userIP | serverIP | serverPort | processID | thread | location | detailInfo-->
    
        <!-- CONSOLE_LOG_PATTERN is referenced by console-appender.xml -->
        <property name="CONSOLE_LOG_PATTERN" value="%clr(${spring_application_name}){cyan}|%clr(%module){blue}|%clr(%d{ISO8601}){faint}|%clr(%p)|%X{requestId}|%X{X-B3-TraceId:-}|%X{requestIp}|%X{userIp}|%ip|${server_port}|${PID}|%clr(%t){faint}|%clr(%.40logger{39}){cyan}.%clr(%method){cyan}:%L|%m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>
    
        <!-- FILE_LOG_PATTERN is referenced by logback-defaults.xml -->
        <property name="FILE_LOG_PATTERN" value="${spring_application_name}|%module|%d{ISO8601}|%p|%X{requestId}|%X{X-B3-TraceId:-}|%X{requestIp}|%X{userIp}|%ip|${server_port}|${PID}|%t|%.40logger{39}.%method:%L|%m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>
    
        <!--
            Default <logger> entries copied from
            org/springframework/boot/logging/logback/defaults.xml
         -->
        <logger name="org.apache.catalina.startup.DigesterFactory" level="ERROR"/>
        <logger name="org.apache.catalina.util.LifecycleBase" level="ERROR"/>
        <logger name="org.apache.coyote.http11.Http11NioProtocol" level="WARN"/>
        <logger name="org.apache.sshd.common.util.SecurityUtils" level="WARN"/>
        <logger name="org.apache.tomcat.util.net.NioSelectorPool" level="WARN"/>
        <logger name="org.eclipse.jetty.util.component.AbstractLifeCycle" level="ERROR"/>
        <logger name="org.hibernate.validator.internal.util.Version" level="WARN"/>
    
    </included>
    
    

    Custom converter: module name

    import ch.qos.logback.classic.pattern.ClassicConverter;
    import ch.qos.logback.classic.spi.ILoggingEvent;

    /**
     * Resolves the module name for the %module conversion word.
     *
     * @author zhangjianbing
     * time 2019/7/9
     */
    public class ModuleConverter extends ClassicConverter {

        private static final int MAX_LENGTH = 20;

        @Override
        public String convert(ILoggingEvent event) {
            // Short logger names (e.g. topics set via @Slf4j(topic = "...")) are used
            // as the module name; fully-qualified class names exceed the cap, so the
            // module column stays empty for ordinary class-based loggers.
            if (event.getLoggerName().length() > MAX_LENGTH) {
                return "";
            } else {
                return event.getLoggerName();
            }
        }
    }
    

    Custom converter: IP address

    import ch.qos.logback.classic.pattern.ClassicConverter;
    import ch.qos.logback.classic.spi.ILoggingEvent;
    import lombok.extern.slf4j.Slf4j;

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    /**
     * Resolves the local IP address for the %ip conversion word.
     *
     * @author zhangjianbing
     * time 2019/7/9
     */
    @Slf4j
    public class IPAddressConverter extends ClassicConverter {

        private static String ipAddress;

        static {
            // Resolved once at class-load time; every log event reuses the cached value.
            try {
                ipAddress = InetAddress.getLocalHost().getHostAddress();
            } catch (UnknownHostException e) {
                log.error("fetch localhost host address failed", e);
                ipAddress = "UNKNOWN";
            }
        }

        @Override
        public String convert(ILoggingEvent event) {
            return ipAddress;
        }
    }
    

    Per-module output

    Add a topic to @Slf4j (a plain-SLF4J equivalent follows the example):

    import lombok.extern.slf4j.Slf4j;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    /**
     * @author zhangjianbing
     * time 2019/7/9
     */
    @RestController
    @RequestMapping(value = "/portal")
    @Slf4j(topic = "LogbackController")
    public class LogbackController {

        @RequestMapping(value = "/gohome")
        public void m1() {
            log.info("buddy,we go home~");
        }

    }
    
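    For reference, @Slf4j(topic = "LogbackController") is Lombok shorthand for declaring a named logger; without Lombok the equivalent is:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class LogbackController {
        // what @Slf4j(topic = "LogbackController") expands to:
        private static final Logger log = LoggerFactory.getLogger("LogbackController");
    }

    Because "LogbackController" is under ModuleConverter's 20-character cap, it appears in the %module column of every line this controller logs.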

    Custom log levels

    To print SQL statements, set the log level of the relevant package to debug (an equivalent Logback XML snippet follows):

    logging.path = /tmp
    logging.level.com.ryan.trading.account.dao = debug
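
    The same can be expressed directly in the Logback config rather than application.properties; a minimal equivalent snippet, using the package from the property above:

    <logger name="com.ryan.trading.account.dao" level="DEBUG"/>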
    

    Pushing to Kafka

    Enable the Kafka appender in application.properties. Given env=test and spring.application.name=logback-framework-project, the topic pattern ${kafka_env}applog_${spring_application_name} from the appender above resolves to testapplog_logback-framework-project.

    ryan.web.logging.kafka.enabled=true
    #Separate multiple brokers with commas
    ryan.web.logging.kafka.broker=127.0.0.1:9092
    #Used as a prefix when composing the Kafka topic name
    ryan.web.logging.kafka.env=test
    

    Log output format reference

    Name        Meaning                              Format
    AppID       Application identifier
    Module      Module / subsystem
    DateTime    Date and time                        TimeStamp
    Level       Log level                            Level
    RequestID   Request identifier
    TraceID     Trace (call-chain) identifier
    RequestIP   Request IP                           IP
    UserIP      User IP                              IP
    ServerIP    Server IP                            IP
    ServerPort  Server port                          Port
    ProcessID   Process identifier
    Thread      Thread name
    Location    Code location (logger.method:line)
    DetailInfo  Detailed message
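
    The RequestID, TraceID, RequestIP and UserIP columns are read from SLF4J's MDC (%X{requestId}, %X{X-B3-TraceId:-}, %X{requestIp}, %X{userIp} in the patterns above), which is why they are empty in the startup example below. The article doesn't show how they get populated, but a minimal sketch of a servlet filter that fills two of them might look like this (the filter name and MDC values are illustrative):

    import org.slf4j.MDC;

    import javax.servlet.*;
    import java.io.IOException;
    import java.util.UUID;

    public class MdcLoggingFilter implements Filter {

        @Override
        public void init(FilterConfig filterConfig) {
        }

        @Override
        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            try {
                // Every log line emitted while handling this request carries these values.
                MDC.put("requestId", UUID.randomUUID().toString());
                MDC.put("requestIp", request.getRemoteAddr());
                chain.doFilter(request, response);
            } finally {
                // Always clear, or values leak into the next request served by this thread.
                MDC.clear();
            }
        }

        @Override
        public void destroy() {
        }
    }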

    Startup example

    Each line is pipe-separated in the field order of the table above; the MDC-backed fields (requestId, traceId, requestIp, userIp) are simply empty between consecutive pipes.

    logback-framework-project||2019-07-09 21:25:48,135|INFO|||||192.168.0.102|8080|49877|main|com.ryan.LogbackBootStrap.logStarting:50|Starting LogbackBootStrap on bjw0101020035.lhwork.net with PID 49877 (/Users/zhangjianbing/learn-note/logback-learn-note/target/classes started by zhangjianbing in /Users/zhangjianbing/learn-note)
    logback-framework-project||2019-07-09 21:25:48,138|INFO|||||192.168.0.102|8080|49877|main|com.ryan.LogbackBootStrap.logStartupProfileInfo:652|No active profile set, falling back to default profiles: default
    logback-framework-project||2019-07-09 21:25:48,248|INFO|||||192.168.0.102|8080|49877|main|ConfigServletWebServerApplicationContext.prepareRefresh:589|Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@1d2bd371: startup date [Tue Jul 09 21:25:48 CST 2019]; root of context hierarchy
    logback-framework-project||2019-07-09 21:25:50,155|INFO|||||192.168.0.102|8080|49877|main|o.s.b.w.embedded.tomcat.TomcatWebServer.initialize:91|Tomcat initialized with port(s): 8080 (http)
    logback-framework-project||2019-07-09 21:25:50,249|INFO|||||192.168.0.102|8080|49877|main|o.apache.catalina.core.StandardService.log:180|Starting service [Tomcat]
    logback-framework-project||2019-07-09 21:25:50,249|INFO|||||192.168.0.102|8080|49877|main|org.apache.catalina.core.StandardEngine.log:180|Starting Servlet Engine: Apache Tomcat/8.5.28
    logback-framework-project||2019-07-09 21:25:50,256|INFO|||||192.168.0.102|8080|49877|localhost-startStop-1|o.a.catalina.core.AprLifecycleListener.log:180|The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/Users/zhangjianbing/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.]
    logback-framework-project||2019-07-09 21:25:50,405|INFO|||||192.168.0.102|8080|49877|localhost-startStop-1|o.a.c.c.C.[Tomcat].[localhost].[/].log:180|Initializing Spring embedded WebApplicationContext
    logback-framework-project||2019-07-09 21:25:50,406|INFO|||||192.168.0.102|8080|49877|localhost-startStop-1|o.s.web.context.ContextLoader.prepareWebApplicationContext:285|Root WebApplicationContext: initialization completed in 2159 ms
    logback-framework-project||2019-07-09 21:25:50,566|INFO|||||192.168.0.102|8080|49877|localhost-startStop-1|o.s.b.w.servlet.ServletRegistrationBean.addRegistration:185|Servlet dispatcherServlet mapped to [/]
    logback-framework-project||2019-07-09 21:25:50,572|INFO|||||192.168.0.102|8080|49877|localhost-startStop-1|o.s.b.w.servlet.FilterRegistrationBean.configure:243|Mapping filter: 'characterEncodingFilter' to: [/*]
    logback-framework-project||2019-07-09 21:25:50,573|INFO|||||192.168.0.102|8080|49877|localhost-startStop-1|o.s.b.w.servlet.FilterRegistrationBean.configure:243|Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
    logback-framework-project||2019-07-09 21:25:50,573|INFO|||||192.168.0.102|8080|49877|localhost-startStop-1|o.s.b.w.servlet.FilterRegistrationBean.configure:243|Mapping filter: 'httpPutFormContentFilter' to: [/*]
    logback-framework-project||2019-07-09 21:25:50,573|INFO|||||192.168.0.102|8080|49877|localhost-startStop-1|o.s.b.w.servlet.FilterRegistrationBean.configure:243|Mapping filter: 'requestContextFilter' to: [/*]
    logback-framework-project||2019-07-09 21:25:50,966|INFO|||||192.168.0.102|8080|49877|main|s.w.s.m.m.a.RequestMappingHandlerAdapter.initControllerAdviceCache:567|Looking for @ControllerAdvice: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@1d2bd371: startup date [Tue Jul 09 21:25:48 CST 2019]; root of context hierarchy
    logback-framework-project||2019-07-09 21:25:51,097|INFO|||||192.168.0.102|8080|49877|main|s.w.s.m.m.a.RequestMappingHandlerMapping.register:548|Mapped "{[/portal/gohome]}" onto public void com.ryan.logback.LogbackController.m1()
    logback-framework-project||2019-07-09 21:25:51,111|INFO|||||192.168.0.102|8080|49877|main|s.w.s.m.m.a.RequestMappingHandlerMapping.register:548|Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.servlet.error.BasicErrorController.error(javax.servlet.http.HttpServletRequest)
    logback-framework-project||2019-07-09 21:25:51,113|INFO|||||192.168.0.102|8080|49877|main|s.w.s.m.m.a.RequestMappingHandlerMapping.register:548|Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.servlet.error.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)
    logback-framework-project||2019-07-09 21:25:51,164|INFO|||||192.168.0.102|8080|49877|main|o.s.w.s.handler.SimpleUrlHandlerMapping.registerHandler:373|Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
    logback-framework-project||2019-07-09 21:25:51,165|INFO|||||192.168.0.102|8080|49877|main|o.s.w.s.handler.SimpleUrlHandlerMapping.registerHandler:373|Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
    logback-framework-project||2019-07-09 21:25:51,262|INFO|||||192.168.0.102|8080|49877|main|o.s.w.s.handler.SimpleUrlHandlerMapping.registerHandler:373|Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
    logback-framework-project||2019-07-09 21:25:51,540|INFO|||||192.168.0.102|8080|49877|main|o.s.j.e.a.AnnotationMBeanExporter.afterSingletonsInstantiated:434|Registering beans for JMX exposure on startup
    logback-framework-project||2019-07-09 21:25:51,603|INFO|||||192.168.0.102|8080|49877|main|o.s.b.w.embedded.tomcat.TomcatWebServer.start:205|Tomcat started on port(s): 8080 (http) with context path ''
    logback-framework-project||2019-07-09 21:25:51,608|INFO|||||192.168.0.102|8080|49877|main|com.ryan.LogbackBootStrap.logStarted:59|Started LogbackBootStrap in 5.001 seconds (JVM running for 7.818)
    

    That's all for this article. The appender used here comes from: https://github.com/danielwegener/logback-kafka-appender

  • Original article: https://www.cnblogs.com/zhangjianbing/p/15895562.html