2511 - Druid Monitoring in Depth: How to Record the Monitoring Data (with logback)


    Druid's monitoring is powerful, but unfortunately the monitoring data lives only in memory. The goal here is to capture it on a schedule, either as log files or as rows in a database.

    There are two ways to record it:

    1. Persist it to a database
    2. Write it out through logback

    How it works (the key part)

    1. If you only want to record SQL monitoring, you can override the log method of DruidDataSourceStatLogger, which is Druid's built-in hook for logging statistics; a sketch follows this list.
    2. Use the internal API to read the monitoring data directly — this is the approach detailed below.
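    For approach 1, here is a minimal sketch of such an override, assuming the adapter is registered on the DruidDataSource (e.g. via setStatLogger) and that the stat value's getName() accessor behaves as in current Druid versions; the class name matches the one referenced in the logback configuration later in this post:

    package com.company.project.support.druid;

    import com.alibaba.druid.pool.DruidDataSourceStatLoggerAdapter;
    import com.alibaba.druid.pool.DruidDataSourceStatValue;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class MyDruidDataSourceStatLoggerAdapter extends DruidDataSourceStatLoggerAdapter {

        private static final Logger LOG = LoggerFactory.getLogger(MyDruidDataSourceStatLoggerAdapter.class);

        @Override
        public void log(DruidDataSourceStatValue statValue) {
            // Invoked by Druid once per timeBetweenLogStatsMillis interval;
            // statValue carries the pooled-connection and SQL statistics,
            // so serialize whatever subset of it you need here.
            LOG.info("druid datasource stat: {}", statValue.getName());
        }
    }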

    The Druid package contains a class named DruidStatService, which is the business class behind the monitoring console.

    It has a service method, public String service(String url), whose parameter is a path string such as /sql.json. Depending on the path, service gathers the corresponding monitoring data and returns it already serialized as a JSON string.

    For example, "/basic.json" returns the basic overview data, and "/weburi.json" returns the data shown on the URI page.

    Through this one business interface we can pull any monitoring data we want.
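    A quick sanity check of the call (a sketch; the ResultCode/Content envelope is how DruidStatService wraps its responses, and the exact Content fields vary by Druid version):

    String basic = DruidStatService.getInstance().service("/basic.json");
    // e.g. {"ResultCode":1,"Content":{...}} -- Content carries the actual metrics
    System.out.println(basic);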

    A Spring Boot scheduled task makes the periodic recording straightforward. Straight to the code.

    Note: SyslogService is a custom business class used to persist the log to the database; it can stay commented out.

    package com.company.project.timetask;
    
    import com.alibaba.druid.stat.DruidStatService;
    import com.alibaba.fastjson.JSONObject;
    import com.company.project.model.Syslog;
    import com.company.project.service.SyslogService;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.scheduling.annotation.Async;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;
    
    import java.time.LocalDateTime;
    import java.util.Date;
    
    
    /**
     * Records Druid's monitoring data on a schedule.
     */
    @Component
    public class DruidLogTask {
    
        private static Logger myLogger = LoggerFactory.getLogger(DruidLogTask.class);
    
        // Obtain the DruidStatService singleton
        private DruidStatService druidStatService = DruidStatService.getInstance();
    
        // Whether this is the first record since a restart
        private boolean isFirstflag = true;
    
    //    @Autowired
    //    private SyslogService syslogService;
    
        // Fires 5 seconds after startup; the commented-out interval of 5*60*1000 ms records every 5 minutes
    //    @Scheduled(initialDelay = 5000, fixedDelay = 300000)
        @Scheduled(initialDelay = 5000, fixedDelay = 20000)
        @Async // run the task asynchronously; also requires @EnableAsync on the application class (see the sketch after this class)
        public void log() throws InterruptedException {
    
            // Marks the first run after a restart
            if (isFirstflag) {
                myLogger.info("===============已重启,重启时间是{},开始新的记录===============", LocalDateTime.now().toString());
                isFirstflag = !isFirstflag;
    //            Syslog newLog = new Syslog();
    //            newLog.setLogType("druidLog");
    //            newLog.setLogBody("restart detected");
    //            newLog.setCreatTime(new Date());
    //            syslogService.save(newLog);
            }
    
            JSONObject allResult = new JSONObject(16, true);
    
            // Overview (index) page data
            String basicJson = druidStatService.service("/basic.json");
            // Data sources
            String datasourceJson = druidStatService.service("/datasource.json");
            // SQL monitoring
            String sqlJson = druidStatService.service("/sql.json?orderBy=SQL&orderType=desc&page=1&perPageCount=1000000&");
            // SQL firewall (wall)
            String wallJson = druidStatService.service("/wall.json");
            // Web application
            String webappJson = druidStatService.service("/webapp.json");
            // URI monitoring
            String weburiJson = druidStatService.service("/weburi.json?orderBy=URI&orderType=desc&page=1&perPageCount=1000000&");
            // Session monitoring
            String websessionJson = druidStatService.service("/websession.json");
            // Spring monitoring
            String springJson = druidStatService.service("/spring.json");
    
            allResult.put("/basic.json", JSONObject.parseObject(basicJson));
            allResult.put("/datasource.json", JSONObject.parseObject(datasourceJson));
            allResult.put("/sql.json", JSONObject.parseObject(sqlJson));
            allResult.put("/wall.json", JSONObject.parseObject(wallJson));
            allResult.put("/webapp.json", JSONObject.parseObject(webappJson));
            allResult.put("/weburi.json", JSONObject.parseObject(weburiJson));
            allResult.put("/websession.json", JSONObject.parseObject(websessionJson));
            allResult.put("/spring.json", JSONObject.parseObject(springJson));
    
            String allResultJsonString = allResult.toJSONString();
            myLogger.info("Druid监控定时记录,allResult==={}", allResultJsonString);
    
    //        Syslog newLog = new Syslog();
    //        newLog.setLogType("druidLog");
    //        newLog.setLogBody(allResultJsonString);
    //        newLog.setCreatTime(new Date());
    //        syslogService.save(newLog);
        }
    }
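    For the @Scheduled and @Async annotations to take effect, scheduling and async execution have to be switched on. A minimal sketch of the entry point, assuming a standard Spring Boot application class (the class and package names are illustrative):

    package com.company.project;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.scheduling.annotation.EnableAsync;
    import org.springframework.scheduling.annotation.EnableScheduling;

    @SpringBootApplication
    @EnableScheduling // required for @Scheduled to fire
    @EnableAsync      // required for @Async on the task method
    public class Application {

        public static void main(String[] args) {
            SpringApplication.run(Application.class, args);
        }
    }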
    
    

    Recording the data to a log file with logback

    This mainly relies on the built-in RollingFileAppender plus a dedicated logger bound to the recording class.

    Core configuration

        <!--配置变量-->
        <!--文件路径前缀-->
        <property name="LOG_HOME_PATH" value="file_logs"/>
        <property name="encoder_pattern" value="%d{yyyy/MM/dd HH:mm:ss.SSS} %-5level [%thread] [%c{0}:%L] : %msg%n"/>
        <property name="maxHistory" value="60"/>
        <property name="maxFileSize" value="10MB"/>
    
    
        <appender name="druidSqlRollingFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
    
            <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
                <fileNamePattern>${LOG_HOME_PATH}/druid-sql.%d.%i.log</fileNamePattern>
                <maxHistory>${maxHistory}</maxHistory>
                <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                    <maxFileSize>${maxFileSize}</maxFileSize>
                </timeBasedFileNamingAndTriggeringPolicy>
            </rollingPolicy>
            <encoder>
                <pattern>${encoder_pattern}</pattern>
                <charset>UTF-8</charset>
            </encoder>
    
        </appender>
    
        <!--配置druid的SQL日志输出-->
        <logger name="druid.sql.Statement" level="DEBUG" additivity="false">
            <appender-ref ref="druidSqlRollingFile" />
        </logger>
    
    
    

    If the write pressure on the file is high, you can add a layer of asynchronous queuing in front of it; the AsyncAppender used for this also ships with logback. For example:
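    A minimal sketch (the ASYNC_DRUID_SQL name is an assumption; it mirrors the ASYNC_INFO appender in the complete configuration below, and the logger now references the async wrapper instead of the file appender directly):

        <appender name="ASYNC_DRUID_SQL" class="ch.qos.logback.classic.AsyncAppender">
            <!-- keep every event; by default TRACE/DEBUG/INFO are dropped once the queue is 80% full -->
            <discardingThreshold>0</discardingThreshold>
            <queueSize>512</queueSize>
            <appender-ref ref="druidSqlRollingFile"/>
        </appender>

        <logger name="druid.sql.Statement" level="DEBUG" additivity="false">
            <appender-ref ref="ASYNC_DRUID_SQL"/>
        </logger>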

    The complete logback configuration file:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration scan="true" scanPeriod="60 seconds" debug="false">
    
        <!--配置变量-->
        <!--文件路径前缀-->
        <property name="LOG_HOME_PATH" value="file_logs"/>
        <property name="encoder_pattern" value="%d{yyyy/MM/dd HH:mm:ss.SSS} %-5level [%thread] [%c{0}:%L] : %msg%n"/>
        <property name="maxHistory" value="60"/>
        <property name="maxFileSize" value="10MB"/>
    
    
        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
            <encoder>
                <pattern>${encoder_pattern}</pattern>
                <charset>UTF-8</charset>
            </encoder>
        </appender>
    
        <appender name="FILE_All" class="ch.qos.logback.core.rolling.RollingFileAppender">
    
            <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
                <fileNamePattern>${LOG_HOME_PATH}/level_all.%d.%i.log</fileNamePattern>
                <maxHistory>${maxHistory}</maxHistory>
                <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                    <maxFileSize>${maxFileSize}</maxFileSize>
                </timeBasedFileNamingAndTriggeringPolicy>
            </rollingPolicy>
            <encoder>
                <pattern>${encoder_pattern}</pattern>
                <charset>UTF-8</charset>
            </encoder>
    
        </appender>
    
        <appender name="FILE_INFO" class="ch.qos.logback.core.rolling.RollingFileAppender">
    
            <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
                <fileNamePattern>${LOG_HOME_PATH}/level_info.%d.%i.log</fileNamePattern>
                <maxHistory>${maxHistory}</maxHistory>
                <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                    <maxFileSize>${maxFileSize}</maxFileSize>
                </timeBasedFileNamingAndTriggeringPolicy>
            </rollingPolicy>
            <encoder>
                <pattern>${encoder_pattern}</pattern>
                <charset>UTF-8</charset>
            </encoder>
            <filter class="ch.qos.logback.classic.filter.LevelFilter">
                <level>INFO</level>
                <onMatch>ACCEPT</onMatch>
                <onMismatch>DENY</onMismatch>
            </filter>
        </appender>
    
    
        <appender name="FILE_DEBUG" class="ch.qos.logback.core.rolling.RollingFileAppender">
            <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    
                <fileNamePattern>${LOG_HOME_PATH}/level_debug.%d.%i.log</fileNamePattern>
                <maxHistory>${maxHistory}</maxHistory>
                <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                    <maxFileSize>${maxFileSize}</maxFileSize>
                </timeBasedFileNamingAndTriggeringPolicy>
            </rollingPolicy>
            <encoder>
                <pattern>${encoder_pattern}</pattern>
                <charset>UTF-8</charset>
            </encoder>
            <filter class="ch.qos.logback.classic.filter.LevelFilter">
                <level>DEBUG</level>
                <onMatch>ACCEPT</onMatch>
                <onMismatch>DENY</onMismatch>
            </filter>
        </appender>
    
        <appender name="FILE_ERROR" class="ch.qos.logback.core.rolling.RollingFileAppender">
            <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
                <fileNamePattern>${LOG_HOME_PATH}/level_error.%d.%i.log</fileNamePattern>
                <maxHistory>${maxHistory}</maxHistory>
                <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                    <maxFileSize>${maxFileSize}</maxFileSize>
                </timeBasedFileNamingAndTriggeringPolicy>
            </rollingPolicy>
            <encoder>
                <pattern>${encoder_pattern}</pattern>
                <charset>UTF-8</charset>
            </encoder>
            <filter class="ch.qos.logback.classic.filter.LevelFilter">
                <level>ERROR</level>
                <onMatch>ACCEPT</onMatch>
                <onMismatch>DENY</onMismatch>
            </filter>
        </appender>
    
        <appender name="FILE_CONTROLLER_LOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
    
            <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
                <fileNamePattern>${LOG_HOME_PATH}/controller_log.%d.%i.log</fileNamePattern>
                <maxHistory>${maxHistory}</maxHistory>
                <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                    <maxFileSize>${maxFileSize}</maxFileSize>
                </timeBasedFileNamingAndTriggeringPolicy>
            </rollingPolicy>
            <encoder>
                <pattern>${encoder_pattern}</pattern>
                <charset>UTF-8</charset>
            </encoder>
            <filter class="ch.qos.logback.classic.filter.LevelFilter">
                <level>INFO</level>
                <onMatch>ACCEPT</onMatch>
                <onMismatch>DENY</onMismatch>
            </filter>
        </appender>
    
    
    
        <appender name="druidSqlRollingFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
    
            <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
                <fileNamePattern>${LOG_HOME_PATH}/druid-sql.%d.%i.log</fileNamePattern>
                <maxHistory>${maxHistory}</maxHistory>
                <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                    <maxFileSize>${maxFileSize}</maxFileSize>
                </timeBasedFileNamingAndTriggeringPolicy>
            </rollingPolicy>
            <encoder>
                <pattern>${encoder_pattern}</pattern>
                <charset>UTF-8</charset>
            </encoder>
    
        </appender>
    
    
        <appender name="druidMonitorRollingFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
    
            <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
                <fileNamePattern>${LOG_HOME_PATH}/druid-monitor.%d.%i.log</fileNamePattern>
                <maxHistory>${maxHistory}</maxHistory>
                <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                    <maxFileSize>${maxFileSize}</maxFileSize>
                </timeBasedFileNamingAndTriggeringPolicy>
            </rollingPolicy>
            <encoder>
                <pattern>${encoder_pattern}</pattern>
                <charset>UTF-8</charset>
            </encoder>
        </appender>
    
        <!--自定义logback的扩展appender-->
        <!--    <appender name="FILE_SELF" class="com.company.project.core.log.MyAppender">
    
                <encoder>
                    <pattern>%d{yyyy/MM/dd HH:mm:ss.SSS} %-5level [%thread] [%c{0}:%L] : %msg%n</pattern>
                    <charset>UTF-8</charset>
                </encoder>
                <filter class="ch.qos.logback.classic.filter.LevelFilter">
                    <level>ERROR</level>
                    <onMatch>ACCEPT</onMatch>
                    <onMismatch>DENY</onMismatch>
                </filter>
            </appender>-->
    
        <!-- 异步INFO输出 -->
        <appender name ="ASYNC_INFO" class= "ch.qos.logback.classic.AsyncAppender">
            <!-- 不丢失日志.默认的,如果队列的80%已满,则会丢弃TRACT、DEBUG、INFO级别的日志 -->
            <discardingThreshold>0</discardingThreshold>
            <!-- 更改默认的队列的深度,该值会影响性能.默认值为256 -->
            <queueSize>512</queueSize>
            <!-- 添加附加的appender,最多只能添加一个 -->
            <appender-ref ref ="FILE_INFO"/>
        </appender>
    
        <!-- 异步输出 -->
        <appender name ="ASYNC_CONTROLLER_LOG" class= "ch.qos.logback.classic.AsyncAppender">
            <!-- 不丢失日志.默认的,如果队列的80%已满,则会丢弃TRACT、DEBUG、INFO级别的日志 -->
            <discardingThreshold>0</discardingThreshold>
            <!-- 更改默认的队列的深度,该值会影响性能.默认值为256 -->
            <queueSize>512</queueSize>
            <!-- 添加附加的appender,最多只能添加一个 -->
            <appender-ref ref ="FILE_CONTROLLER_LOG"/>
        </appender>
    
        <!-- 控制台输出日志级别 -->
        <root level="DEBUG">
            <appender-ref ref="STDOUT"/>
            <appender-ref ref="FILE_All"/>
            <appender-ref ref="FILE_DEBUG"/>
            <appender-ref ref="ASYNC_INFO"/>
            <appender-ref ref="FILE_ERROR"/>
            <!--<appender-ref ref="FILE_SELF"/>-->
        </root>
    
        <!--配置druid的SQL日志输出-->
        <logger name="druid.sql.Statement" level="DEBUG" additivity="false">
            <appender-ref ref="druidSqlRollingFile" />
        </logger>
    
        <!--配置druid的监控日志输出-->
        <!--<logger name="com.company.project.support.druid.MyDruidDataSourceStatLoggerAdapter" level="DEBUG" additivity="false">-->
            <!--<appender-ref ref="druidMonitorRollingFile" />-->
        <!--</logger>-->
    
        <!--配置定时任务DruidLogTask的监控日志输出-->
        <logger name="com.company.project.timetask.DruidLogTask" level="DEBUG" additivity="false">
            <appender-ref ref="druidMonitorRollingFile" />
        </logger>
    
        <!--配置aop对controller参数日志的监控-->
        <logger name="com.company.project.support.aop.ControllerLogAop" level="INFO" additivity="false">
            <appender-ref ref="ASYNC_CONTROLLER_LOG" />
        </logger>
    
    
        <!-- <logger name="com.mchange" level="ERROR" /> -->
         <logger name="org.springframework" level="ERROR" />
         <logger name="org.mybatis" level="ERROR" />
        <!-- <logger name="org.apache.activemq" level="ERROR" /> -->
    
        <logger name="java.sql.Connection" level="DEBUG" />
        <logger name="java.sql.Statement" level="DEBUG" />
        <logger name="java.sql.PreparedStatement" level="DEBUG" />
    
        <logger name="org.springframework.scheduling" level="INFO"/>
        <logger name="org.springframework.session" level="INFO"/>
    
        <logger name="org.apache.catalina.startup.DigesterFactory" level="ERROR"/>
        <logger name="org.apache.catalina.util.LifecycleBase" level="ERROR"/>
        <logger name="org.apache.coyote.http11.Http11NioProtocol" level="WARN"/>
        <logger name="org.apache.sshd.common.util.SecurityUtils" level="WARN"/>
        <logger name="org.apache.tomcat.util.net.NioSelectorPool" level="WARN"/>
        <logger name="org.crsh.plugin" level="WARN"/>
        <logger name="org.crsh.ssh" level="WARN"/>
        <logger name="org.eclipse.jetty.util.component.AbstractLifeCycle" level="ERROR"/>
        <logger name="org.hibernate.validator.internal.util.Version" level="WARN"/>
        <logger name="org.springframework.boot.actuate.autoconfigure.CrshAutoConfiguration" level="WARN"/>
    
        <!-- 级别依次为【从高到低】:FATAL > ERROR > WARN > INFO > DEBUG > TRACE  -->
    
    </configuration>
    