集中式 Java 日志记录
声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/11100760/
Warning: these are provided under cc-by-sa 4.0 license. You are free to use/share it, But you must attribute it to the original authors (not me):
StackOverFlow
Centralised Java Logging
提问by Sebastian van Wickern
I'm looking for a way to centralise the logging concerns of distributed software (written in Java), which would be quite easy, since the system in question has only one server. But keeping in mind that it is very likely that more instances of the particular server will run in the future (and there are going to be more applications in need of this), there would have to be something like a logging server, which takes care of incoming logs and makes them accessible for the support team.
我正在寻找一种方法来集中分布式软件(用 Java 编写)的日志记录问题,这很容易,因为所讨论的系统只有一台服务器。但是请记住,将来很可能会运行更多特定服务器的实例(并且会有更多的应用程序需要这样做),因此必须有类似 Logging-Server 之类的东西处理传入的日志并使支持团队可以访问它们。
The situation right now is that several Java applications use log4j, which writes its data to local files. So if a client experiences problems, the support team has to ask for the logs, which isn't always easy and takes a lot of time. In the case of a server fault the diagnosis problem is not as big, since there is remote access anyway, but even so, monitoring everything through a logging server would still make a lot of sense.
现在的情况是,几个 Java 应用程序使用 log4j 将其数据写入本地文件,因此如果客户遇到问题,支持团队必须要求提供日志,这并不总是那么容易并且需要很多时间。在服务器故障的情况下,诊断问题没有那么大,因为无论如何都有远程访问,但即使如此,通过日志服务器监控一切仍然很有意义。
While I went through the questions regarding "centralised logging" I found another question (actually the only one with a (in this case) usable answer). The problem is that all applications are running in a closed environment (within one network) and security guidelines do not permit anything concerning internal software to leave the environment's network.
当我浏览有关“集中式日志记录”的问题时,我发现了另一个问题(实际上是唯一一个有(在这种情况下)可用答案的问题)。问题在于,所有应用程序都运行在一个封闭环境中(在同一个网络内),而安全准则不允许任何与内部软件有关的内容离开该环境的网络。
I also found a wonderful article about how one would implement such a logging server. Since the article was written in 2001, I would have thought that someone might have already solved this particular problem. But my search results came up with nothing.
我还发现了一篇关于如何实现这样的日志服务器的精彩文章。由于这篇文章是在 2001 年写的,我会认为有人可能已经解决了这个特定的问题。但我的搜索结果一无所获。
My question: is there a logging framework that handles logging over networks to a centralised server which can be accessed by the support team?
我的问题:是否有一个日志框架,可以通过网络将日志记录到一个支持团队能够访问的集中式服务器?
Specification:
规格:
- Availability
- Server has to be run by us.
- Java 1.5 compatibility
- Compatibility to a heterogeneous network.
- Best-Case: Protocol uses HTTP to send logs (to avoid firewall-issues)
- Best-Case: Uses log4j or LogBack or basically anything that implements slf4j
- 可用性
- 服务器必须由我们运行。
- Java 1.5 兼容性
- 对异构网络的兼容性。
- 最佳情况:协议使用 HTTP 发送日志(以避免防火墙问题)
- 最佳情况:使用 log4j 或 LogBack 或基本上任何实现 slf4j 的东西
Not necessary, but nice to have
没有必要,但很高兴拥有
- Authentication and security are of course an issue, but could be deferred for at least a while (if it is open-source software we would extend it to our needs; OT: we always give back to the projects).
- Data mining and analysis is something which is very helpful to make software better, but that could as well be an external application.
- 身份验证和安全性当然是一个问题,但至少可以推迟一段时间(如果它是开放软件,我们会将其扩展到我们的需要OT:我们总是回馈项目)。
- 数据挖掘和分析对于改进软件非常有帮助,但这也可以是外部应用程序。
My worst-case scenario is that there is no software like that. In that case, we would probably implement it ourselves. But if there is such a client-server application, I would very much appreciate not needing to do this particularly problematic bit of work.
我最糟糕的情况是不存在这样的软件。对于这种情况,我们可能会自己实现。但是如果有这样一个客户端-服务器应用程序,我将非常感激不需要做这项特别有问题的工作。
Thanks in advance
提前致谢
Update:The solution has to run on several java-enabled platforms. (Mostly Windows, Linux, some HP Unix)
更新:该解决方案必须在多个支持 Java 的平台上运行。(主要是 Windows、Linux、一些 HP Unix)
Update: After a lot more research we actually found a solution we were able to acquire. clusterlog.net (offline since at least mid-2015) provides logging services for distributed software and is compatible with log4j and logback (which is compatible with slf4j). It lets us analyze every single user's way through the application, thus making it very easy to reproduce reported bugs (or even non-reported ones). It also notifies us of important events by email and has a report system where logs of the same origin are summarized into an easily accessible format. They deployed it here (which went flawlessly) just a couple of days ago and it is running great.
更新:经过大量研究,我们实际上找到了一个我们能够采用的解决方案。clusterlog.net(至少从 2015 年年中开始离线)为分布式软件提供日志服务,并且兼容 log4j 和 logback(兼容 slf4j)。它让我们可以分析每个用户在应用程序中的完整操作路径,因此可以很容易地重现报告的错误(甚至未报告的错误)。它还通过电子邮件通知我们重要事件,并有一个报告系统,将相同来源的日志汇总为易于访问的格式。几天前,他们在这里部署了它(过程完美无瑕),并且运行良好。
Update (2016): this question still gets a lot of traffic, but the site I referred to does not exist anymore.
更新(2016):这个问题仍然有很多流量,但我提到的网站已经不存在了。
采纳答案by Arcadien
You can use log4j with the SocketAppender; you then have to write the server part yourself to process the incoming LogEvents. See http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/net/SocketAppender.html
您可以将 log4j 与 SocketAppender 一起使用,然后您必须自己编写处理传入 LogEvent 的服务器部分。见 http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/net/SocketAppender.html
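For example, the client side needs nothing more than configuration, and log4j 1.2 also ships a bare-bones org.apache.log4j.net.SimpleSocketServer that can stand in for the server part while prototyping. A minimal sketch (host name and port are placeholders):

```properties
# log4j.properties on each client application
log4j.rootLogger=INFO, server

# Ship serialized LoggingEvents to the central log server
log4j.appender.server=org.apache.log4j.net.SocketAppender
log4j.appender.server.RemoteHost=loghost.example.local
log4j.appender.server.Port=4712
# Keep retrying if the logging server is restarted
log4j.appender.server.ReconnectionDelay=10000
```

While prototyping, the matching server can be started as `java org.apache.log4j.net.SimpleSocketServer 4712 log4j-server.properties`; it replays the received events through whatever appenders the server-side log4j configuration defines.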
回答by hB0
NXLOG or LogStash or Graylog2
NXLOG 或 LogStash 或 Graylog2
or
或者
LogStash + ElasticSearch (+optionally Kibana)
LogStash + ElasticSearch(+ 可选 Kibana)
Example:
例子:
1) http://logstash.net/docs/1.3.3/tutorials/getting-started-simple
1) http://logstash.net/docs/1.3.3/tutorials/getting-started-simple
2) http://logstash.net/docs/1.3.3/tutorials/getting-started-centralized
2) http://logstash.net/docs/1.3.3/tutorials/getting-started-centralized
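As a concrete illustration of the centralized setup, a Logstash (1.3-era) pipeline can accept events straight from log4j SocketAppenders and index them into ElasticSearch. A minimal sketch, with the port and host made up for the example:

```
# logstash.conf (illustrative)
input {
  log4j {
    # Accepts serialized LoggingEvents from log4j SocketAppenders
    port => 4712
  }
}
output {
  elasticsearch {
    host => "localhost"
  }
}
```

Kibana can then be pointed at the same ElasticSearch instance so the support team can browse and search the logs.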
回答by Frank
Have a look at logFaces; it looks like your specifications are met. http://www.moonlit-software.com/
看看 logFaces,看起来你的规格得到满足。 http://www.moonlit-software.com/
- Availability (check)
- Server has to be run by us. (check)
- Java 1.5 compatibility (check)
- Compatibility to a heterogeneous network. (check)
- Best-Case: Protocol uses HTTP to send logs (to avoid firewall-issues) (almost: uses TCP/UDP)
- Best-Case: Uses log4j or LogBack or basically anything that implements slf4j (check)
- Authentication (check)
- Data mining and analysis (possible through extension api)
- 可用性(符合)
- 服务器必须由我们运行。(符合)
- Java 1.5 兼容性(符合)
- 对异构网络的兼容性。(符合)
- 最佳情况:协议使用 HTTP 发送日志(以避免防火墙问题)(几乎:使用 TCP/UDP)
- 最佳情况:使用 log4j 或 LogBack 或基本上任何实现 slf4j 的东西(符合)
- 身份验证(符合)
- 数据挖掘和分析(可通过扩展 API 实现)
回答by Andrew Андрей Листочкин
There's a ready-to-use solution from Facebook - Scribe - that is using Apache Hadoop under the hood. However, most companies I'm aware of still tend to develop in-house systems for that. I worked at one such company and dealt with logs there about two years ago. We also used Hadoop. In our case we had the following setup:
Facebook 有一个现成的解决方案——Scribe——它在底层使用 Apache Hadoop。然而,我所知道的大多数公司仍然倾向于为此开发内部系统。大约两年前,我在一家这样的公司工作并在那里处理日志。我们还使用了 Hadoop。在我们的例子中,我们有以下设置:
- We had a small dedicated cluster of machines for log aggregation.
- Workers mined logs from the production services and then parsed individual lines.
- Then reducers would aggregate the necessary data and prepare reports.
- 我们有一个小型的专用机器集群用于日志聚合。
- 工人从生产服务中挖掘日志,然后解析各个行。
- 然后,reducers 将汇总必要的数据并准备报告。
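Stripped of Hadoop, the worker/reducer split above is just parse-then-aggregate. A toy, Hadoop-free sketch of the idea in plain Java (the log-line format and field positions are assumptions for the example):

```java
import java.util.Map;
import java.util.TreeMap;

public class LogLevelReport {

    // "Worker" step: parse one log line and extract the level token.
    // Assumes lines look like "2012-06-19 12:00:01 ERROR some message".
    static String parseLevel(String line) {
        String[] parts = line.split("\\s+");
        return parts.length >= 3 ? parts[2] : "UNKNOWN";
    }

    public static void main(String[] args) {
        String[] lines = {
            "2012-06-19 12:00:01 INFO  startup complete",
            "2012-06-19 12:00:05 ERROR connection refused",
            "2012-06-19 12:00:09 ERROR retry failed",
            "2012-06-19 12:00:12 WARN  falling back to cache"
        };

        // "Reducer" step: aggregate event counts per level.
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            counts.merge(parseLevel(line), 1, Integer::sum);
        }
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            System.out.println(e.getKey() + "=" + e.getValue());
        }
    }
}
```

In the real setup the workers ran close to the production machines and the reducer ran on the dedicated cluster, but the data flow is the same.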
We had a small and fixed number of reports that we were interested in. In rare cases when we wanted to perform a different kind of analysis we would simply add a specialized reducer code for that and optionally run it against old logs.
我们有少量且固定数量的感兴趣的报告。在极少数情况下,当我们想要执行不同类型的分析时,我们只需为此添加专门的 reducer 代码,并有选择地针对旧日志运行它。
If you can't decide in advance what kind of analyses you are interested in, then it'll be better to store the structured data prepared by the workers in HBase or some other NoSQL database (here, for example, people use MongoDB). That way you won't need to re-aggregate data from the raw logs and will be able to query the datastore instead.
如果你不能提前决定你对哪种分析感兴趣,那么最好将工作人员准备的结构化数据存储在 HBase 或其他一些 NoSQL 数据库中(例如,人们使用 Mongo DB)。这样你就不需要从原始日志中重新聚合数据,而是能够查询数据存储。
There are a number of good articles about such log-aggregation solutions, for example about using Pig to query the aggregated data. Pig lets you query large Hadoop-based datasets with SQL-like queries.
关于这种日志聚合解决方案,有很多不错的文章,例如使用 Pig 查询聚合数据。Pig 允许您使用类似 SQL 的查询来查询基于 Hadoop 的大型数据集。
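To give a flavour of the Pig approach, a query over such aggregated logs might look like this (the path and schema are made up for the example):

```
-- Count log events per level across a day's aggregated logs
logs     = LOAD '/logs/2012-06-19' USING PigStorage(' ')
           AS (date:chararray, time:chararray, level:chararray, msg:chararray);
by_level = GROUP logs BY level;
counts   = FOREACH by_level GENERATE group AS level, COUNT(logs) AS n;
DUMP counts;
```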