Original URL: http://stackoverflow.com/questions/7992473/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use and share them, but you must attribute them to the original authors (not me): StackOverFlow
Using MDC in log4j to dynamically name the log file
Asked by Shamis Shukoor
Is it somehow possible to use MDC to name the log file at run time?
I have a single web application which is being called by different names at the same time using Tomcat docBase. So I need a separate log file for each of them.
Answered by Dan
This can be accomplished in Logback, the successor to Log4J.
Logback is intended as a successor to the popular log4j project, picking up where log4j leaves off.
See the documentation for the SiftingAppender.
The SiftingAppender is unique in its capacity to reference and configure nested appenders. In the above example, within the SiftingAppender there will be nested FileAppender instances, each instance identified by the value associated with the "userid" MDC key. Whenever the "userid" MDC key is assigned a new value, a new FileAppender instance will be built from scratch. The SiftingAppender keeps track of the appenders it creates. Appenders unused for 30 minutes will be automatically closed and discarded.
In the example, they generate a separate log file for each user based on an MDC value. Other MDC values could be used depending on your needs.
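A minimal logback.xml along the lines of the documentation's example; the "userid" MDC key, file names, and pattern here are illustrative, not prescribed:

```xml
<configuration>
  <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
    <!-- the discriminator picks the MDC key whose value selects the nested appender -->
    <discriminator>
      <key>userid</key>
      <defaultValue>unknown</defaultValue>
    </discriminator>
    <sift>
      <!-- one FileAppender instance is created per distinct "userid" value -->
      <appender name="FILE-${userid}" class="ch.qos.logback.core.FileAppender">
        <file>${userid}.log</file>
        <append>true</append>
        <encoder>
          <pattern>%d [%thread] %level %logger - %msg%n</pattern>
        </encoder>
      </appender>
    </sift>
  </appender>
  <root level="DEBUG">
    <appender-ref ref="SIFT"/>
  </root>
</configuration>
```

At run time, calling `MDC.put("userid", "alice")` before logging routes that thread's events to `alice.log`.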
Answered by Wolfgang
This is also possible with log4j. You can do it by implementing your own appender; I guess the easiest way is to subclass AppenderSkeleton.
All logging events end up in the append(LoggingEvent event) method you have to implement.
In that method you can access the MDC via event.getMDC("nameOfTheKeyToLookFor").
Then you can use this information to open the file to write to. It may be helpful to look at the implementation of standard appenders such as RollingFileAppender to figure out the rest.
I used this approach myself in an application to separate the logs of different threads into different log files and it worked very well.
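Stripped of the log4j plumbing, what such an appender does in append() boils down to caching one writer per MDC value. A self-contained sketch of that core idea (the class and method names are illustrative, not log4j API — a real implementation would extend AppenderSkeleton and take the value from event.getMDC and the line from layout.format(event)):

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: lazily open one log file per MDC value and
// route each formatted event to the file matching its value.
public class MdcSiftingSketch {
    private final Map<String, PrintWriter> writers = new ConcurrentHashMap<>();

    public void append(String mdcValue, String line) {
        writers.computeIfAbsent(mdcValue, key -> {
            try {
                // append mode, autoflush on println
                return new PrintWriter(new FileWriter("log-" + key + ".log", true), true);
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }).println(line);
    }

    public void close() {
        writers.values().forEach(PrintWriter::close);
    }
}
```

A real AppenderSkeleton subclass would also need to handle the case where the MDC key is absent (e.g. fall back to a default file), much like Logback's defaultValue.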
Answered by bpodgursky
I struggled for a while to find SiftingAppender-like functionality in log4j (we couldn't switch to logback because of some dependencies), and ended up with a programmatic solution that works pretty well, using an MDC and appending loggers at runtime:
// this can be any thread-specific string
String processID = request.getProcessID();

Logger logger = Logger.getRootLogger();

// append a new file appender if no appender exists yet for this tag
if (logger.getAppender(processID) == null) {
    try {
        String pattern = "%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n";
        String logfile = "log/" + processID + ".log";

        FileAppender fileAppender = new FileAppender(
                new PatternLayout(pattern), logfile, true);
        fileAppender.setName(processID);

        // add a filter so we can ignore any logs from other threads
        fileAppender.addFilter(new ProcessIDFilter(processID));
        logger.addAppender(fileAppender);
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}

// tag all child threads with this process-id so we can separate out log output
MDC.put("process-id", processID);

// whatever you want to do in the thread
LOG.info("This message will only end up in " + processID + ".log!");

MDC.remove("process-id");
The filter added above just checks for a specific process ID:
public class ProcessIDFilter extends Filter {

    private final String processId;

    public ProcessIDFilter(String processId) {
        this.processId = processId;
    }

    @Override
    public int decide(LoggingEvent event) {
        // accept only events whose MDC "process-id" matches this appender's ID
        Object mdc = event.getMDC("process-id");
        if (processId.equals(mdc)) {
            return Filter.ACCEPT;
        }
        return Filter.DENY;
    }
}
Hope this helps a bit.
Answered by gerardnico
As of 2020-01-20, this is built-in functionality in Log4j.
To achieve it, you just need to use a RoutingAppender together with the MDC.
Example:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN" monitorInterval="30">
  <Appenders>
    <Routing name="Analytics" ignoreExceptions="false">
      <Routes>
        <Script name="RoutingInit" language="JavaScript"><![CDATA[
          // This script must return a route name.
          //
          // Example from https://logging.apache.org/log4j/2.x/manual/appenders.html#RoutingAppender
          // on how to get an MDC value:
          //   logEvent.getContextMap().get("event_type");
          //
          // but as we use only one route with a dynamic name, we return 1
          1
        ]]>
        </Script>
        <Route>
          <RollingFile
              name="analytics-${ctx:event_type}"
              fileName="logs/analytics/${ctx:event_type}.jsonl"
              filePattern="logs/analytics/$${date:yyyy-MM}/analytics-${ctx:event_type}-%d{yyyy-dd-MM-}-%i.jsonl.gz">
            <PatternLayout>
              <pattern>%m%n</pattern>
            </PatternLayout>
            <Policies>
              <TimeBasedTriggeringPolicy/>
              <SizeBasedTriggeringPolicy size="250 MB"/>
            </Policies>
          </RollingFile>
        </Route>
      </Routes>
      <!-- TTL for created appenders -->
      <IdlePurgePolicy timeToLive="15" timeUnit="minutes"/>
    </Routing>
  </Appenders>
  <Loggers>
    <Logger name="net.bytle.api.http.AnalyticsLogger" level="debug" additivity="false">
      <AppenderRef ref="Analytics"/>
    </Logger>
  </Loggers>
</Configuration>
To learn more, see Log4j - How to route messages to a dynamically created log file.