Java OutOfMemory 异常:加载 zip 文件时出现 mmap 错误
声明:本页面是 StackOverFlow 热门问题的中英对照翻译,遵循 CC BY-SA 4.0 协议。如果您需要使用它,必须同样遵循 CC BY-SA 许可,注明原文地址和作者信息,并将其归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/12815309/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share them, but you must attribute them to the original authors (not me):
StackOverFlow
Java OutOfMemory exception: mmap error on loading zip file
提问by Darya Dmitrichenko
I run my app on production env (rhel 5.2 x64, oracle jre 1.7_05, tomcat 7.0.28) with JVM arguments:
我使用 JVM 参数在生产环境(rhel 5.2 x64、oracle jre 1.7_05、tomcat 7.0.28)上运行我的应用程序:
-Xms8192m -Xmx8192m -XX:MaxPermSize=1024m
-Doracle.net.tns_admin=/var/ora_net -XX:ReservedCodeCacheSize=512m -XX:+AggressiveOpts -XX:+UseFastAccessorMethods
-XX:+UseStringCache -XX:+OptimizeStringConcat -XX:+UseCompressedOops -XX:+UseG1GC -Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9026 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false
After some time, I get a stack trace like this:
运行一段时间后,我会得到这样的堆栈跟踪:
Java HotSpot(TM) 64-Bit Server VM warning: Attempt to deallocate stack guard pages failed.
Java HotSpot(TM) 64-Bit Server VM warning: Attempt to allocate stack guard pages failed.
mmap failed for CEN and END part of zip file
[...]
Caused by: java.lang.OutOfMemoryError: null
at java.util.zip.ZipFile.$$YJP$$open(Native Method) ~[na:1.7.0_05]
at java.util.zip.ZipFile.open(Unknown Source) ~[na:1.7.0_05]
at java.util.zip.ZipFile.<init>(Unknown Source) ~[na:1.7.0_05]
at java.util.zip.ZipFile.<init>(Unknown Source) ~[na:1.7.0_05]
at java.util.jar.JarFile.<init>(Unknown Source) ~[na:1.7.0_05]
at java.util.jar.JarFile.<init>(Unknown Source) ~[na:1.7.0_05]
at sun.net.www.protocol.jar.URLJarFile.<init>(Unknown Source) ~[na:1.7.0_05]
at sun.net.www.protocol.jar.URLJarFile.getJarFile(Unknown Source) ~[na:1.7.0_05]
at sun.net.www.protocol.jar.JarFileFactory.get(Unknown Source) ~[na:1.7.0_05]
at sun.net.www.protocol.jar.JarURLConnection.connect(Unknown Source) ~[na:1.7.0_05]
at sun.net.www.protocol.jar.JarURLConnection.getInputStream(Unknown Source) ~[na:1.7.0_05]
at java.net.URL.openStream(Unknown Source) ~[na:1.7.0_05]
at org.apache.catalina.loader.WebappClassLoader.findLoadedResource(WebappClassLoader.java:3279) ~[na:na]
at org.apache.catalina.loader.WebappClassLoader.getResourceAsStream(WebappClassLoader.java:1478) ~[na:na]
at org.apache.http.util.VersionInfo.loadVersionInfo(VersionInfo.java:242) ~[httpcore-4.2.jar:4.2]
at org.apache.http.impl.client.DefaultHttpClient.setDefaultHttpParams(DefaultHttpClient.java:180) ~[httpclient-4.2.jar:4.2]
at org.apache.http.impl.client.DefaultHttpClient.createHttpParams(DefaultHttpClient.java:158) ~[httpclient-4.2.jar:4.2]
at org.apache.http.impl.client.AbstractHttpClient.getParams(AbstractHttpClient.java:448) ~[httpclient-4.2.jar:4.2]
Looking at my profiler, everything is OK (heap and non-heap memory are only about 10% used) and I have no idea where the problem is.
查看我的分析器,一切正常(堆和非堆内存只使用了约 10%),我不知道问题出在哪里。
This problem happens every day at the same time, and it is not related to the application's uptime. What is causing this problem?
这个问题每天都在同一时间发生,与应用程序的运行时长无关。是什么原因导致了这个问题?
Edited:
编辑:
New output in log file:
日志文件中的新输出:
Java HotSpot(TM) 64-Bit Server VM warning: CodeCache is full. Compiler has been disabled.
Java HotSpot(TM) 64-Bit Server VM warning: Try increasing the code cache size using -XX:ReservedCodeCacheSize=
Code Cache [0x00002aaaab790000, 0x00002aaaad240000, 0x00002aaacb790000)
total_blobs=4223 nmethods=3457 adapters=707 free_code_cache=497085Kb largest_free_block=508887936
But I have enough memory: http://i.stack.imgur.com/K8VMx.jpg
但我有足够的内存:http://i.stack.imgur.com/K8VMx.jpg
Answer: the problem is in the Java version. It is described here: https://forums.oracle.com/forums/thread.jspa?messageID=10369413
答:问题出在 Java 版本上。此处有相关描述:https://forums.oracle.com/forums/thread.jspa?messageID=10369413
采纳答案by Peter Lawrey
I have seen these errors before when running out of resources, such as swap space or the allowed number of memory mappings. Compare the output of sudo cat /proc/$PID/maps | wc -l
with cat /proc/sys/vm/max_map_count
我以前在资源耗尽时见过这些错误,例如交换空间耗尽或达到允许的内存映射数量上限。可以将 sudo cat /proc/$PID/maps | wc -l 的输出
与 cat /proc/sys/vm/max_map_count 的值进行对比。
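As a rough illustration of that check, here is a small Java sketch (Linux-only; the usageRatio helper is made up for the demo, not something from the answer) that compares the current process's mapping count against the kernel limit:
作为该检查的一个粗略示例,下面是一个小的 Java 草图(仅适用于 Linux;usageRatio 辅助方法是为演示而虚构的),它将当前进程的内存映射数量与内核限制进行对比:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class MapCountCheck {

    // Fraction of the kernel's mapping limit currently in use.
    static double usageRatio(long used, long max) {
        return (double) used / max;
    }

    public static void main(String[] args) throws IOException {
        // Linux-only: /proc/self/maps lists this process's memory mappings
        // (one region per line); /proc/sys/vm/max_map_count is the per-process limit.
        Path maps = Paths.get("/proc/self/maps");
        Path limitFile = Paths.get("/proc/sys/vm/max_map_count");
        if (!Files.exists(maps) || !Files.exists(limitFile)) {
            System.out.println("No /proc filesystem here; run this on the Linux server.");
            return;
        }
        long used;
        try (Stream<String> lines = Files.lines(maps)) {
            used = lines.count();
        }
        long max = Long.parseLong(Files.readAllLines(limitFile).get(0).trim());
        System.out.printf("mappings: %d / %d (%.1f%% of max_map_count)%n",
                used, max, 100 * usageRatio(used, max));
    }
}
```

If the ratio approaches 1.0, mmap calls (including the ones ZipFile makes internally) can start failing even though the Java heap looks healthy.
如果该比率接近 1.0,即使 Java 堆看起来很健康,mmap 调用(包括 ZipFile 内部发起的那些)也可能开始失败。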
See comments below.
请参阅下面的评论。
I also suggested ....
我也建议....
You appear to have run into a bug with YourKit. What version are you using?
您似乎遇到了 YourKit 的错误。你用的是什么版本?
I would cut down most of your options, as they are either defaults that don't do anything, or could be complicating matters.
我会删减你的大部分选项,因为它们要么是不起任何作用的默认值,要么可能使问题复杂化。
-mx8g -XX:MaxPermSize=1g -Doracle.net.tns_admin=/var/ora_net
-XX:ReservedCodeCacheSize=512m -XX:+UseG1GC -Dcom.sun.management.jmxremote.port=9026
I would also try dropping -XX:+UseG1GC, as it is a relatively new collector and removing it shouldn't change your results.
我还会尝试去掉 -XX:+UseG1GC,因为它是一个相对较新的收集器,去掉它应该不会改变你的结果。
回答by titogeo
Try these options
试试这些选项
-Xrunhprof:heap=all,depth=12,cutoff=0
This will generate a dump file in the application root. Later you can analyse it with HP JMeter. This will give a snapshot of what happened to your 8 GB of memory. You can see the HP JMeter manuals here.
这将在应用程序根目录中生成一个转储文件。之后您可以使用 HP JMeter 进行分析,这能为您的 8 GB 内存提供一个快照。您可以在此处查看 HP JMeter 手册。
Also choose your Xrunhprof options wisely: the option I mentioned above would generate a huge dump file. You can find suitable options in the manuals.
另外,请明智地选择您的 Xrunhprof 选项:我上面提到的选项会生成一个巨大的转储文件。您可以从手册中找到合适的选项。
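For reference, a full launch line with this flag might look like the following sketch (the jar name is just a placeholder; by default HPROF writes its text output to java.hprof.txt in the working directory):
作为参考,带有此标志的完整启动命令大致如下(jar 名称只是占位符;默认情况下 HPROF 会将文本输出写入工作目录下的 java.hprof.txt):

```
java -Xrunhprof:heap=all,depth=12,cutoff=0 -jar myapp.jar
```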
回答by ggrandes
Some paragraphs of the original blog article, this explains how java jar/zip works:
原始博客文章的一些段落,这解释了 java jar/zip 是如何工作的:
The OOM error is triggered during a native call (ZipFile.open(Native Method)) from the Java JDK ZipFile to load our application EAR file. This native JVM operation requires proper native memory and virtual address space available in order to execute its loading operation. The conclusion at this point was that our Java VM 1.5 was running out of native memory / virtual address space at deployment time.
Sun Java VM native memory and MMAP files
When using JDK 1.4 / 1.5, any JAR / ZIP file loaded by the Java VM gets mapped entirely into the address space. This means that the more EAR / JAR files you load into a single JVM, the higher the native memory footprint of your Java process.
This also means that the larger your Java heap and PermGen space are, the less memory is left for native memory spaces such as the C-heap and mmap'ed files, which can definitely be a problem if you are deploying too many separate applications (EAR files) to a single 32-bit Java process.
Please note that Sun came up with improvements in JDK 1.6 (Mustang) and changed the behaviour so that the JAR file's central directory is still mapped, but the entries themselves are read separately; reducing the native memory requirement.
I suggest that you review the Sun Bug Id link below for more detail on such JDK 1.4 / 1.5 limitation. http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6280693
OOM 错误是在 Java JDK ZipFile 的本机调用 (ZipFile.open(Native Method)) 期间触发的,以加载我们的应用程序 EAR 文件。此本地 JVM 操作需要适当的本地内存和可用的虚拟地址空间才能执行其加载操作。此时的结论是,我们的 Java VM 1.5 在部署时耗尽了本机内存/虚拟地址空间。
Sun Java VM 本机内存和 MMAP 文件
使用 JDK 1.4 / 1.5 时,Java VM 加载的任何 JAR / ZIP 文件都会完全映射到地址空间。这意味着加载到单个 JVM 的 EAR/JAR 文件越多,Java 进程的本机内存占用就越高。
这也意味着 Java 堆和 PermGen 空间越大,留给本机内存空间(例如 C 堆和内存映射文件)的内存就越少;如果您将太多独立的应用程序(EAR 文件)部署到单个 32 位 Java 进程中,这肯定会成为问题。
请注意,Sun 在 JDK 1.6 (Mustang) 中提出了改进并更改了行为,以便 JAR 文件的中央目录仍然被映射,但条目本身是单独读取的;减少本机内存需求。
我建议您查看下面的 Sun Bug Id 链接,了解有关此类 JDK 1.4 / 1.5 限制的更多详细信息。 http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6280693
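The ZipFile behaviour described above can be seen in a small self-contained Java sketch (the file and entry names are made up for the demo):
上面描述的 ZipFile 行为可以通过下面这个自包含的小 Java 示例观察到(文件名和条目名是为演示而虚构的):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class ZipOpenDemo {

    // Opens the archive (a native ZipFile.open call under the hood) and
    // returns the number of entries read from its central directory.
    static int countEntries(Path zip) throws IOException {
        // try-with-resources releases the native handle (and any mapping)
        // as soon as we are done, instead of waiting for finalization.
        try (ZipFile zf = new ZipFile(zip.toFile())) {
            return zf.size();
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a tiny zip so the demo is self-contained.
        Path zip = Files.createTempFile("demo", ".zip");
        try (ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(zip))) {
            out.putNextEntry(new ZipEntry("hello.txt"));
            out.write("hello".getBytes(StandardCharsets.UTF_8));
            out.closeEntry();
        }
        System.out.println("entries: " + countEntries(zip));
        Files.delete(zip);
    }
}
```

Closing each ZipFile promptly matters precisely because the native side holds the mapping: leaking open handles multiplies the native footprint the article describes.
及时关闭每个 ZipFile 很重要,因为映射是由本机侧持有的:泄漏未关闭的句柄会成倍放大文章中描述的本机内存占用。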
回答by Davut Gürbüz
Not sure what has changed in Java 1.7; as I remember, from Java 1.6 we used the Xms/Xmx options as below (note there is no = sign in these flags):
不确定 Java 1.7 中有什么变化;我记得从 Java 1.6 开始,我们是像下面这样使用 Xms/Xmx 选项的(注意这些标志中没有等号):
-Xms512m -Xmx512m