Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me): StackOverflow. Original question: http://stackoverflow.com/questions/1818704/

Date: 2020-08-12 22:57:12 · Source: igfitidea

Java out of memory Exception

Tags: java, tomcat, xml-parsing, quartz-scheduler, jdom

Asked by Amit

I am running a Java web application in Tomcat. The application uses the Quartz framework to schedule a cron job at regular intervals. This cron job parses a 4+ MB XML file, which I do using the JDOM API. The XML file contains around 3600 nodes to parse, and the resulting data is then updated in the DB, which I do sequentially.
After parsing almost half of the file, my application throws an OutOfMemoryError. The stack trace is:

Exception in thread "ContainerBackgroundProcessor[StandardEngine[Catalina]]" java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOfRange(Arrays.java:3210)
        at java.lang.String.<init>(String.java:216)
        at java.lang.StringBuffer.toString(StringBuffer.java:585)
        at org.netbeans.lib.profiler.server.ProfilerRuntimeMemory.traceVMObjectAlloc(ProfilerRuntimeMemory.java:170)
        at java.lang.Throwable.getStackTraceElement(Native Method)
        at java.lang.Throwable.getOurStackTrace(Throwable.java:590)
        at java.lang.Throwable.getStackTrace(Throwable.java:582)
        at org.apache.juli.logging.DirectJDKLog.log(DirectJDKLog.java:155)
        at org.apache.juli.logging.DirectJDKLog.error(DirectJDKLog.java:135)
        at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1603)
        at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1610)
        at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1590)
        at java.lang.Thread.run(Thread.java:619)
Exception in thread "*** JFluid Monitor thread ***" java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Arrays.java:2760)
        at java.util.Arrays.copyOf(Arrays.java:2734)
        at java.util.Vector.ensureCapacityHelper(Vector.java:226)
        at java.util.Vector.add(Vector.java:728)
        at org.netbeans.lib.profiler.server.Monitors$SurvGenAndThreadsMonitor.updateSurvGenData(Monitors.java:230)
        at org.netbeans.lib.profiler.server.Monitors$SurvGenAndThreadsMonitor.run(Monitors.java:169)
Nov 30, 2009 2:22:05 PM org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor processChildren
SEVERE: Exception invoking periodic operation:
java.lang.OutOfMemoryError: Java heap space
        at java.lang.StringCoding$StringEncoder.encode(StringCoding.java:232)
        at java.lang.StringCoding.encode(StringCoding.java:272)
        at java.lang.String.getBytes(String.java:946)
        at java.io.UnixFileSystem.getLastModifiedTime(Native Method)
        at java.io.File.lastModified(File.java:826)
        at org.apache.catalina.startup.HostConfig.checkResources(HostConfig.java:1175)
        at org.apache.catalina.startup.HostConfig.check(HostConfig.java:1269)
        at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:296)
        at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:118)
        at org.apache.catalina.core.ContainerBase.backgroundProcess(ContainerBase.java:1337)
        at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1601)
        at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1610)
        at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1590)
        at java.lang.Thread.run(Thread.java:619)
ERROR [JobRunShell]: Job updateVendorData.quoteUpdate threw an unhandled Exception:
java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOfRange(Arrays.java:3210)
        at java.lang.String.<init>(String.java:216)
        at java.lang.StringBuffer.toString(StringBuffer.java:585)
        at org.apache.commons.dbcp.PoolingConnection$PStmtKey.hashCode(PoolingConnection.java:296)
        at java.util.HashMap.get(HashMap.java:300)
        at org.apache.commons.pool.impl.GenericKeyedObjectPool.decrementActiveCount(GenericKeyedObjectPool.java:1085)
        at org.apache.commons.pool.impl.GenericKeyedObjectPool.returnObject(GenericKeyedObjectPool.java:882)
        at org.apache.commons.dbcp.PoolablePreparedStatement.close(PoolablePreparedStatement.java:80)
        at org.apache.commons.dbcp.DelegatingStatement.close(DelegatingStatement.java:168)
        at com.netcore.smsapps.stock.db.CompanyDaoImpl.updateCompanyQuote(CompanyDaoImpl.java:173)
        at com.netcore.smsapps.stock.vendor.MyirisVendor.readScripQuotes(MyirisVendor.java:159)
        at com.netcore.smsapps.stock.update.StockUpdateData.execute(StockUpdateData.java:38)
        at org.quartz.core.JobRunShell.run(JobRunShell.java:207)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:525)
DEBUG [ExceptionHelper]: Detected JDK support for nested exceptions.
ERROR [ErrorLogger]: Job (updateVendorData.quoteUpdate threw an exception.
org.quartz.SchedulerException: Job threw an unhandled exception. [See nested exception: java.lang.OutOfMemoryError: Java heap space]
        at org.quartz.core.JobRunShell.run(JobRunShell.java:216)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:525)
Caused by: java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOfRange(Arrays.java:3210)
        at java.lang.String.<init>(String.java:216)
        at java.lang.StringBuffer.toString(StringBuffer.java:585)
        at org.apache.commons.dbcp.PoolingConnection$PStmtKey.hashCode(PoolingConnection.java:296)
        at java.util.HashMap.get(HashMap.java:300)
        at org.apache.commons.pool.impl.GenericKeyedObjectPool.decrementActiveCount(GenericKeyedObjectPool.java:1085)
        at org.apache.commons.pool.impl.GenericKeyedObjectPool.returnObject(GenericKeyedObjectPool.java:882)
        at org.apache.commons.dbcp.PoolablePreparedStatement.close(PoolablePreparedStatement.java:80)
        at org.apache.commons.dbcp.DelegatingStatement.close(DelegatingStatement.java:168)
        at com.netcore.smsapps.stock.db.CompanyDaoImpl.updateCompanyQuote(CompanyDaoImpl.java:173)
        at com.netcore.smsapps.stock.vendor.MyirisVendor.readScripQuotes(MyirisVendor.java:159)
        at com.netcore.smsapps.stock.update.StockUpdateData.execute(StockUpdateData.java:38)
        at org.quartz.core.JobRunShell.run(JobRunShell.java:207)

This even causes my Tomcat to crash. Can you please help me diagnose the problem? I enabled profiling in NetBeans for this, but it seems that even the profiler crashed. I have kept the default memory allocated to Tomcat. Is there a memory leak taking place? My DB is Postgres and my JDK is 1.6.0_15.

Thanks, Amit

Answered by Rubens Farias

Every time you use a DOM parser on an XML file, you load the entire file into memory, and the DOM infrastructure needs about the same amount again to handle it, so it consumes roughly twice your file size in memory.

You'll need to use SAX, an event-based parser. While this can be hard to understand the first time, it's very memory-efficient, as it only keeps the node currently being parsed in memory.

Java ships with streaming parsers out of the box — SAX, and also the related pull-based StAX API. I hope this helps.
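A minimal sketch of the SAX approach (the `<quote>` element layout here is hypothetical — the real vendor feed isn't shown in the question):

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class QuoteSaxDemo {

    /** Streams through the XML, handling one element at a time. */
    static int parseQuotes(String xml) throws Exception {
        final int[] count = {0};
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new InputSource(new StringReader(xml)), new DefaultHandler() {
            @Override
            public void startElement(String uri, String localName,
                                     String qName, Attributes attrs) {
                if ("quote".equals(qName)) {
                    // Only the current element lives in memory; a real job
                    // would issue its DB update here instead of first building
                    // a full tree of all 3600 nodes.
                    count[0]++;
                }
            }
        });
        return count[0];
    }

    public static void main(String[] args) throws Exception {
        String xml = "<quotes><quote symbol=\"ABC\" price=\"10.5\"/>"
                   + "<quote symbol=\"XYZ\" price=\"20.1\"/></quotes>";
        System.out.println(parseQuotes(xml) + " quotes parsed");
    }
}
```

Unlike JDOM, nothing here ever holds the whole document, so peak memory stays flat regardless of file size.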

Answered by lorenzog

Are you sure there isn't a recursive array copy somewhere, left there by mistake? Perhaps in a different thread?

Answered by duffymo

I'll second the point about the file and the DOM taking up a great deal of memory. I also wonder when I see this:

ERROR [JobRunShell]: Job updateVendorData.quoteUpdate threw an unhandled Exception:  
    java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOfRange(Arrays.java:3210)

What's that copying doing? I wonder if there's something else bad going on in your code.

If you've gotten this far, it suggests that you've read the file and built the DOM successfully, and you're starting to write to the database. The file's memory should already have been reclaimed.

I'd suggest looking at memory usage with VisualGC so you can see what's going on.

Answered by BalusC

Parsing XML is a fairly expensive task. The average DOM parser already needs at least five times as much memory as the XML document is big. You should take this fact into account as well. To make sure there is no memory leak somewhere else causing the memory shortage for the XML parser, you really need to run a profiler. Give it all more memory, double the available memory, and profile it. Once you've nailed down the cause and fixed the leak, you can fall back to the "default" memory and retest. Or, if there really is no leak at all, just give it all a bit more memory than the default so that everything fits.

You can also consider using a more memory-efficient XML parser instead, for example VTD-XML (homepage here, benchmarks here).

Answered by Jason Gritman

Have you tried setting the max heap size bigger to see if the problem still occurs? There may not be a leak at all. It might just be that the default heap size (64 MB on Windows, I think) is insufficient for this particular process.

I find that I almost always need to give any application I'm running in Tomcat more heap and perm gen space than the defaults, or I'll run into out-of-memory problems. If you need help adjusting the memory settings, take a look at this question.
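One common way to pass such settings to Tomcat is a `setenv.sh` file, which `catalina.sh` sources at startup; a minimal sketch, with illustrative sizes:

```shell
# $CATALINA_BASE/bin/setenv.sh -- create it if absent; catalina.sh sources it.
# Sizes are examples only; tune them to your actual load.
export CATALINA_OPTS="-Xms256m -Xmx1024m -XX:MaxPermSize=256m"
```

Keeping the flags in `setenv.sh` (rather than editing `catalina.sh`) means they survive Tomcat upgrades.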

Answered by Matt Crinklaw-Vogt

You could run your application with -XX:+HeapDumpOnOutOfMemoryError. This will cause the JVM to produce a heap dump when it runs out of memory. You can then use something like MAT or JHAT to see which objects are being held on to. I suggest using the Eclipse Memory Analyzer (MAT) on the generated heap dump, as it is fairly straightforward to use: http://www.eclipse.org/mat/

Of course you will need some idea of which objects may be hanging around for this to be useful. DOM objects? Resources from previous loads of XML documents? Database connections? MAT will let you trace the references from an object that you suspect should have been garbage collected back to its GC root.
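Putting that together, a sketch of the flags plus a command-line inspection step (the dump path and PID below are illustrative):

```shell
# Ask the JVM to dump the heap whenever an OutOfMemoryError is thrown.
export CATALINA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/tomcat"

# After a crash, open the resulting .hprof file in MAT, or browse it with
# jhat (bundled with JDK 6) at http://localhost:7000:
jhat /var/log/tomcat/java_pid1234.hprof
```

The dump is written at the moment of the OOM, so it captures exactly the object graph that filled the heap.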

Answered by user1374131

Try increasing the RAM allocated to your JVM. It should help.

Fix for Eclipse: you can configure this in the Eclipse preferences as follows

  1. Windows -> Preferences (on Mac: Eclipse -> Preferences)
  2. Java -> Installed JREs
  3. Select the JRE and click Edit
  4. In the default VM arguments field, type -Xmx1024M (or your memory preference; for 1 GB of RAM it's 1024)
  5. Click Finish or OK.
Answered by Sujith PS

You have to allocate more space to the PermGen space of the Tomcat JVM.

This can be done with the JVM argument: -XX:MaxPermSize=128m

By default, the PermGen space is 64 MB (and it holds all loaded classes, so if you have a lot of jars (classes) on your classpath, you may indeed fill this space).

On a side note, you can monitor the size of the PermGen space with JVisualVM, and you can even inspect its content with YourKit Java Profiler.
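A quick command-line sketch of both raising and watching PermGen (the PID and size are illustrative; these flags apply to JDK 7 and earlier, where PermGen still exists):

```shell
# Raise the PermGen ceiling for the Tomcat JVM.
export CATALINA_OPTS="-XX:MaxPermSize=128m"

# Sample GC utilisation of a running JVM every 5 seconds;
# the P column reports PermGen usage as a percentage of its capacity.
jstat -gcutil 1234 5000
```

If the P column sits near 100% before the crash, growing PermGen is the right fix; if the O (old generation) column fills instead, the problem is ordinary heap.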

Answered by Manjush

Try increasing the RAM allocated to your JVM. It should help.

Fix for Eclipse: you can configure this in the Eclipse preferences as follows

  1. Windows -> Preferences (on Mac: Eclipse -> Preferences)
  2. Java -> Installed JREs
  3. Select the JRE and click Edit
  4. In the default VM arguments field, type -Xms256m -Xmx512m -XX:MaxPermSize=512m -XX:PermSize=128m (or your memory preference)
  5. Click Finish or OK.
