1. Standardized log output format
2. Colored console output
3. Asynchronous push to Kafka
4. Log file compression and archiving

We don't need to worry about the Logback version; we only pick the Spring Boot version, and the parent project pulls Logback in automatically. Spring Boot can already print logs out of the box, so why standardize logging at all?

- A unified format makes logs easier to read and manage.
- Log archiving.
- Log persistence.
- Distributed log viewing (ELK), convenient for searching.

Skipping the general introduction to Logback, let's get straight to the code. This article covers:

- Redefining the log output format.
- Customizing the log level for specific packages.
- Outputting logs per module.
- Pushing logs to Kafka asynchronously.
POM file

If you need to persist logs to disk, add the following two dependencies (they can be included even if you don't push logs to Kafka):

<properties>
    <logback-kafka-appender.version>0.2.0-RC1</logback-kafka-appender.version>
    <janino.version>2.7.8</janino.version>
</properties>

<!-- Ship logs to Kafka -->
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>${logback-kafka-appender.version}</version>
    <scope>runtime</scope>
</dependency>
<!-- Needed when using <if condition="..."> in the Logback XML -->
<dependency>
    <groupId>org.codehaus.janino</groupId>
    <artifactId>janino</artifactId>
    <version>${janino.version}</version>
</dependency>
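One note on Logback itself (not part of the original POM snippet): it does not need to be declared explicitly, because any Spring Boot starter already pulls in spring-boot-starter-logging and logback-classic, with the version managed by the Boot parent. The usual web starter is enough, for example:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>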
Configuration files

The project's resources folder contains three configuration files:

logback-defaults.xml
logback-pattern.xml
logback-spring.xml

logback-spring.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="logging/logback-pattern.xml"/>
    <include resource="logging/logback-defaults.xml"/>
</configuration>
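A side note on the file name (my reading of Spring Boot's Logback extensions, not something the original spells out): the file is called logback-spring.xml rather than logback.xml so that Spring Boot processes it, which is what lets the <springProperty> elements in logback-pattern.xml read values from application.properties, for example:

# hypothetical values; these keys are read by <springProperty> in logback-pattern.xml
spring.application.name=demo-app
server.port=8080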
logback-defaults.xml

<?xml version="1.0" encoding="UTF-8"?>
<included>
    <!-- Spring's default log file -->
    <property name="LOG_FILE" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/spring.log}"/>
    <!-- Output directory for the log files -->
    <property name="LOG_HOME" value="${LOG_PATH:-/tmp}"/>

    <!-- Append logs to the console, reusing the console appender Spring Boot already provides.
         The <logger> elements (in logback-pattern.xml) set the level for a specific package or class. -->
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <include resource="logback-pattern.xml"/>

    <!-- Maximum size of a single log file; larger files are compressed and archived -->
    <property name="INFO_MAX_FILE_SIZE" value="100MB"/>
    <property name="ERROR_MAX_FILE_SIZE" value="100MB"/>
    <property name="TRACE_MAX_FILE_SIZE" value="100MB"/>
    <property name="WARN_MAX_FILE_SIZE" value="100MB"/>

    <!-- Maximum retention of archived log files -->
    <property name="INFO_MAX_HISTORY" value="9"/>
    <property name="ERROR_MAX_HISTORY" value="9"/>
    <property name="TRACE_MAX_HISTORY" value="9"/>
    <property name="WARN_MAX_HISTORY" value="9"/>

    <!-- Maximum total size of all archived logs; exceeding it triggers deletion of the oldest archives -->
    <property name="INFO_TOTAL_SIZE_CAP" value="5GB"/>
    <property name="ERROR_TOTAL_SIZE_CAP" value="5GB"/>
    <property name="TRACE_TOTAL_SIZE_CAP" value="5GB"/>
    <property name="WARN_TOTAL_SIZE_CAP" value="5GB"/>

    <!-- Roll over to a new log file every day -->
    <appender name="INFO_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Current log file name -->
        <file>${LOG_HOME}/info.log</file>
        <!-- Compressed backups -->
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/backup/info/info.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
            <maxHistory>${INFO_MAX_HISTORY}</maxHistory>
            <maxFileSize>${INFO_MAX_FILE_SIZE}</maxFileSize>
            <totalSizeCap>${INFO_TOTAL_SIZE_CAP}</totalSizeCap>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>${FILE_LOG_PATTERN}</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
    </appender>

    <appender name="WARN_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Current log file name -->
        <file>${LOG_HOME}/warn.log</file>
        <!-- Compressed backups -->
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/backup/warn/warn.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
            <maxHistory>${WARN_MAX_HISTORY}</maxHistory>
            <maxFileSize>${WARN_MAX_FILE_SIZE}</maxFileSize>
            <totalSizeCap>${WARN_TOTAL_SIZE_CAP}</totalSizeCap>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>${FILE_LOG_PATTERN}</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>WARN</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <appender name="ERROR_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Current log file name -->
        <file>${LOG_HOME}/error.log</file>
        <!-- Compressed backups -->
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/backup/error/error.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
            <maxHistory>${ERROR_MAX_HISTORY}</maxHistory>
            <maxFileSize>${ERROR_MAX_FILE_SIZE}</maxFileSize>
            <totalSizeCap>${ERROR_TOTAL_SIZE_CAP}</totalSizeCap>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>${FILE_LOG_PATTERN}</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- Kafka appender -->
    <appender name="KAFKA" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>${FILE_LOG_PATTERN}</pattern>
        </encoder>
        <topic>${kafka_env}applog_${spring_application_name}</topic>
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy"/>
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/>
        <producerConfig>bootstrap.servers=${kafka_broker}</producerConfig>
        <!-- don't wait for a broker to ack the reception of a batch -->
        <producerConfig>acks=0</producerConfig>
        <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
        <producerConfig>linger.ms=1000</producerConfig>
        <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
        <producerConfig>max.block.ms=0</producerConfig>
        <!-- Optional parameter to use a fixed partition -->
        <partition>8</partition>
    </appender>

    <appender name="KAFKA_ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="KAFKA"/>
    </appender>

    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="INFO_FILE"/>
        <appender-ref ref="WARN_FILE"/>
        <appender-ref ref="ERROR_FILE"/>
        <if condition='"true".equals(property("kafka_enabled"))'>
            <then>
                <appender-ref ref="KAFKA_ASYNC"/>
            </then>
        </if>
    </root>
</included>

Note: <partition>8</partition> above selects the partition messages are sent to. If your topic only has partitions 0-7, the appender will report an error; either remove the element or point it at a valid partition. HostNameKeyingStrategy controls how the record key is generated. Kafka uses the key to decide which partition a message goes to; keying by host name means all logs produced by one server end up in the same partition, which preserves their time order. The default, NoKeyKeyingStrategy, spreads messages randomly across partitions, so the time order of the logs can no longer be guaranteed; it is not recommended for logging.

logback-pattern.xml

<?xml version="1.0" encoding="UTF-8"?>
<included>
    <!-- Rendering rules, e.g. colored output and exception formatting -->
    <conversionRule conversionWord="clr" converterClass="org.springframework.boot.logging.logback.ColorConverter"/>
    <conversionRule conversionWord="wex" converterClass="org.springframework.boot.logging.logback.WhitespaceThrowableProxyConverter"/>
    <conversionRule conversionWord="wEx" converterClass="org.springframework.boot.logging.logback.ExtendedWhitespaceThrowableProxyConverter"/>

    <!-- Custom conversion rules -->
    <conversionRule conversionWord="ip" converterClass="com.ryan.utils.IPAddressConverter"/>
    <conversionRule conversionWord="module" converterClass="com.ryan.utils.ModuleConverter"/>

    <!-- Context properties -->
    <springProperty scope="context" name="spring_application_name" source="spring.application.name"/>
    <springProperty scope="context" name="server_port" source="server.port"/>

    <!-- Kafka properties -->
    <springProperty scope="context" name="kafka_enabled" source="ryan.web.logging.kafka.enabled"/>
    <springProperty scope="context" name="kafka_broker" source="ryan.web.logging.kafka.broker"/>
    <springProperty scope="context" name="kafka_env" source="ryan.web.logging.kafka.env"/>

    <!-- Log line layout:
         appID | module | dateTime | level | requestID | traceID | requestIP | userIP | serverIP | serverPort | processID | thread | location | detailInfo -->

    <!-- CONSOLE_LOG_PATTERN is referenced by console-appender.xml -->
    <property name="CONSOLE_LOG_PATTERN" value="%clr(${spring_application_name}){cyan}|%clr(%module){blue}|%clr(%d{ISO8601}){faint}|%clr(%p)|%X{requestId}|%X{X-B3-TraceId:-}|%X{requestIp}|%X{userIp}|%ip|${server_port}|${PID}|%clr(%t){faint}|%clr(%.40logger{39}){cyan}.%clr(%method){cyan}:%L|%m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

    <!-- FILE_LOG_PATTERN is referenced by logback-defaults.xml -->
    <property name="FILE_LOG_PATTERN" value="${spring_application_name}|%module|%d{ISO8601}|%p|%X{requestId}|%X{X-B3-TraceId:-}|%X{requestIp}|%X{userIp}|%ip|${server_port}|${PID}|%t|%.40logger{39}.%method:%L|%m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

    <!-- Default loggers copied over from org/springframework/boot/logging/logback/defaults.xml -->
    <logger name="org.apache.catalina.startup.DigesterFactory" level="ERROR"/>
    <logger name="org.apache.catalina.util.LifecycleBase" level="ERROR"/>
    <logger name="org.apache.coyote.http11.Http11NioProtocol" level="WARN"/>
    <logger name="org.apache.sshd.common.util.SecurityUtils" level="WARN"/>
    <logger name="org.apache.tomcat.util.net.NioSelectorPool" level="WARN"/>
    <logger name="org.eclipse.jetty.util.component.AbstractLifeCycle" level="ERROR"/>
    <logger name="org.hibernate.validator.internal.util.Version" level="WARN"/>
</included>

The patterns above reference the MDC keys requestId, requestIp and userIp; nothing in this article fills them, so a sketch of a filter that could populate them follows the controller example further below.

Custom converter for the module name (%module)

import ch.qos.logback.classic.pattern.ClassicConverter;
import ch.qos.logback.classic.spi.ILoggingEvent;

/**
 * <b>Function:</b> derive the log module name
 *
 * program: ModuleConverter
 * Package: com.kingbal.king.dmp
 * author: songjianlin
 * date: 2024/04/17
 * version: 1.0
 * Copyright: 2024 www.kingbal.com Inc. All rights reserved.
 */
public class ModuleConverter extends ClassicConverter {

    private static final int MAX_LENGTH = 20;

    @Override
    public String convert(ILoggingEvent event) {
        // logger names longer than MAX_LENGTH are suppressed to keep the module column compact
        if (event.getLoggerName().length() > MAX_LENGTH) {
            return "";
        } else {
            return event.getLoggerName();
        }
    }
}
Custom converter for the IP address (%ip)

import java.net.InetAddress;
import java.net.UnknownHostException;

import ch.qos.logback.classic.pattern.ClassicConverter;
import ch.qos.logback.classic.spi.ILoggingEvent;
import lombok.extern.slf4j.Slf4j;

/**
 * <b>Function:</b> resolve the local IP address
 *
 * program: IPAddressConverter
 * Package: com.kingbal.king.dmp
 * author: songjianlin
 * date: 2024/04/17
 * version: 1.0
 * Copyright: 2024 www.kingbal.com Inc. All rights reserved.
 */
@Slf4j
public class IPAddressConverter extends ClassicConverter {

    private static String ipAddress;

    static {
        try {
            ipAddress = InetAddress.getLocalHost().getHostAddress();
        } catch (UnknownHostException e) {
            log.error("fetch localhost host address failed", e);
            ipAddress = "UNKNOWN";
        }
    }

    @Override
    public String convert(ILoggingEvent event) {
        return ipAddress;
    }
}
Per-module output

Give @Slf4j a topic. The topic becomes the logger name, which is exactly what the %module converter above prints, so every line logged by this class carries its own module label.

import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * <b>Function:</b> todo
 *
 * program: LogbackController
 * Package: com.kingbal.king.dmp
 * author: songjianlin
 * date: 2024/04/17
 * version: 1.0
 * Copyright: 2024 www.kingbal.com Inc. All rights reserved.
 */
@RestController
@RequestMapping(value = "/portal")
@Slf4j(topic = "LogbackController")
public class LogbackController {

    @RequestMapping(value = "/gohome")
    public void m1() {
        log.info("buddy,we go home~");
    }
}
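As promised above: the console and file patterns use %X{requestId}, %X{requestIp} and %X{userIp}, but Logback only prints what is already in the MDC, and nothing in this article sets those keys. Below is a minimal sketch of a servlet filter that could populate them; the class name, header name and ID format are assumptions, not part of the original setup (and on Spring Boot 3 the javax.servlet imports would become jakarta.servlet).

import java.io.IOException;
import java.util.UUID;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

import org.slf4j.MDC;
import org.springframework.stereotype.Component;

/**
 * Hypothetical filter: fills the MDC keys referenced by the log patterns
 * (requestId, requestIp, userIp) and clears them once the request is done.
 */
@Component
public class MdcLoggingFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        try {
            MDC.put("requestId", UUID.randomUUID().toString());
            MDC.put("requestIp", httpRequest.getRemoteAddr());
            // behind a proxy the real client IP is usually in X-Forwarded-For
            String forwarded = httpRequest.getHeader("X-Forwarded-For");
            MDC.put("userIp", forwarded != null ? forwarded : httpRequest.getRemoteAddr());
            chain.doFilter(request, response);
        } finally {
            // the thread goes back to the pool, so always clean up the MDC
            MDC.clear();
        }
    }
}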
Custom log levels

If you want SQL statements to show up, set the DAO package to debug level. The logging.path property feeds Logback's LOG_PATH, which logback-defaults.xml uses as LOG_HOME, so with the settings below the log files land under /tmp.

logging.path=/tmp
logging.level.com.ryan.trading.account.dao=debug
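If you would rather keep this override next to the rest of the Logback setup, an equivalent one-liner could go into logback-pattern.xml instead (a sketch, not part of the original files):

<logger name="com.ryan.trading.account.dao" level="DEBUG"/>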
Pushing to Kafka

ryan.web.logging.kafka.enabled=true
# separate multiple brokers with commas
ryan.web.logging.kafka.broker=127.0.0.1:9092
# used when composing the Kafka topic name
ryan.web.logging.kafka.env=test
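To verify that log lines actually arrive, you can tail the topic with Kafka's console consumer. With env=test and a hypothetical spring.application.name of demo-app, the ${kafka_env}applog_${spring_application_name} template from logback-defaults.xml yields the topic testapplog_demo-app:

kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic testapplog_demo-app --from-beginning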