java.lang.OutOfMemoryError: Java heap space

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must do so under the same license, link to the original, and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/2336793/

java.lang.OutOfMemoryError: Java heap space

Tags: java, servlets

Asked by Terman

I have the error described below in the trace when I try to upload an 80,193KB FITS file for processing, in order to display selected fields. Basically I have a mock web interface that allows a user to select up to 6 FITS files for uploading and processing. I don't get an error when I upload two [different] FITS files, roughly 54,574KB each, at a go for processing. The fields are displayed/printed on the console. However, on uploading a single 80,193KB file I get the error below. How do I resolve it?

I initially thought that the iteration was being computationally expensive, but I suspect it occurs on invoking readHDU for the 80MB file:

while ((newBasicHDU = newFits.readHDU()) != null) { 

How do I efficiently resolve it? I'm running the program on Windows 7. Cheers

Trace:

SEVERE: Servlet.service() for servlet FitsFileProcessorServlet threw exception
java.lang.OutOfMemoryError: Java heap space
    at java.lang.reflect.Array.multiNewArray(Native Method)
    at java.lang.reflect.Array.newInstance(Unknown Source)
    at nom.tam.util.ArrayFuncs.newInstance(ArrayFuncs.java:1028)
    at nom.tam.fits.ImageData.read(ImageData.java:258)
    at nom.tam.fits.Fits.readHDU(Fits.java:573)
    at controller.FITSFileProcessor.processFITSFile(FITSFileProcessor.java:79)
    at controller.FITSFileProcessor.doPost(FITSFileProcessor.java:53)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849)
    at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
    at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454)
    at java.lang.Thread.run(Unknown Source)

Code:

    /**
     * Handles a multipart upload request and passes each uploaded FITS
     * file on for processing.
     *
     * @param request  the servlet request carrying the multipart upload
     * @param response the servlet response
     * @throws IOException if reading the request body fails
     */
    public void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException {

        // Check that we have a file upload request
        boolean isMultipart = ServletFileUpload.isMultipartContent(request);

        if (isMultipart) {

            Fits newFits = new Fits();
            BasicHDU newBasicHDU = null;
            ServletFileUpload upload = new ServletFileUpload();                     // Create a new file upload handler

            // Parse the request
            try {
                //List items = upload.parseRequest(request);                        // FileItem
                FileItemIterator iter = upload.getItemIterator(request);

                // iterate through the number of FITS FILES on the Server
                while (iter.hasNext()) {
                    FileItemStream item = (FileItemStream) iter.next();
                    if (!item.isFormField()) {
                        this.processFITSFile(item, newFits,newBasicHDU );
                    }
                }
            } catch (FileUploadException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }       
        }
    }

    /**
     * Reads one uploaded FITS file and prints selected header fields
     * of each HDU to the console.
     *
     * @param item        the uploaded file item
     * @param newFits     the Fits instance (reassigned from the item's stream)
     * @param newBasicHDU the HDU holder used while iterating
     * @throws IOException if the upload stream cannot be read
     */
    public void processFITSFile(FileItemStream item, Fits newFits, BasicHDU newBasicHDU) throws IOException {

        // Process the fits file
        if (!item.isFormField()) {
            String fileName = item.getName();                                       //name of the FITS File
            try {   
                System.out.println("Fits File Fields Printout: " +  fileName);
                InputStream fitsStream = item.openStream();                             
                newFits = new Fits(fitsStream);
                System.out.println( "number of hdu's if: " + newFits.getNumberOfHDUs());

                while ((newBasicHDU = newFits.readHDU()) != null)  {                //line 76
                    System.out.println("Telescope Used: " + newBasicHDU.getTelescope());
                    System.out.println("Author: " + newBasicHDU.getAuthor());
                    System.out.println("Observer: " + newBasicHDU.getObserver() );
                    System.out.println("Origin: " + newBasicHDU.getOrigin() );
                    System.out.println("End of Printout for: \n" + fileName);
                    System.out.println();               
                }

                fitsStream.close();

            } catch (Exception e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }       
    }

Answered by chubbsondubs

Most of the answers have been centered around increasing your heap size from the default (which is 64MB). 64MB would explain why you are able to upload 54,574KB successfully because it's under that size, and I bet tomcat and your program don't take up much more than 10MB when booted up. Increasing your memory is a good idea, but it's really treating the symptoms not the disease.

If you plan on allowing multiple users to upload big files then two users uploading 80MB files at the same time will require 160MB. And, if 3 users do this you'll need 240MB, etc. Here is what I would suggest. Find a library that doesn't read this stuff into RAM before writing it to disk. That will make your app scale, and that's the real solution to this problem.
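The streaming idea above can be sketched with plain java.io: copy the upload stream to a temp file through a small fixed buffer, so heap use stays constant regardless of file size. This is a minimal illustration, not the original answerer's code; the file-name prefix and 8KB buffer size are arbitrary choices.

```java
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamToDisk {

    // Copies an input stream to a temp file using a small fixed buffer,
    // so memory use stays constant no matter how large the upload is.
    static File spoolToDisk(InputStream in) throws IOException {
        File tmp = File.createTempFile("upload-", ".fits");
        try (OutputStream out = new FileOutputStream(tmp)) {
            byte[] buffer = new byte[8192];   // only 8KB of heap, ever
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
        return tmp;
    }

    public static void main(String[] args) throws IOException {
        // Simulate an "upload" with an in-memory stream of 1MB of zeros.
        InputStream fakeUpload = new ByteArrayInputStream(new byte[1024 * 1024]);
        File spooled = spoolToDisk(fakeUpload);
        System.out.println("spooled " + spooled.length() + " bytes to disk");
        spooled.deleteOnExit();
    }
}
```

Once spooled to disk, the FITS library can read from the file instead of an in-memory copy, and concurrent uploads no longer multiply heap usage.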

Use jconsole (comes with the JDK) to look at your heap size while the program is running. Jconsole is really easy to use and you don't have to configure your JVM to use it. So in that sense it's much easier than a profiler, but you won't get as much detail about your program. However, you can see the three portions of memory better (Eden, Survivor, and Tenured). Sometimes strange things can cause one of those areas to run out of memory even though you have allocated lots of memory to the JVM. JConsole will show you things like that.

Answered by Brian

Sounds like you've not allocated enough memory to Tomcat - you can address this by specifying, for example, -Xmx512m to allocate up to 512MB of memory.
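For a standard Tomcat install, one common place to set this is CATALINA_OPTS in bin/setenv.sh, which catalina.sh picks up at startup. The file path and the exact heap values below are illustrative assumptions, not part of the original answer:

```shell
# bin/setenv.sh (create it if it doesn't exist; sourced by catalina.sh)
# -Xms sets the initial heap size, -Xmx the maximum heap size.
CATALINA_OPTS="-Xms128m -Xmx512m"
export CATALINA_OPTS
```

Keeping JVM options in setenv.sh rather than editing catalina.sh directly means upgrades won't overwrite your settings.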

See here for more details on how to set this.

Answered by BalusC

The best fix would be to enhance the code so that it does not unnecessarily duplicate the data in memory. The stack trace suggests that the code is trying to clone the file contents in memory. If it is not possible to enhance the (3rd party?) code so that it does not do that, but instead immediately processes it, then configure/use Commons FileUpload so that it does not keep the uploaded files in memory, but instead in temp storage on the local disk file system.

Your best bet to minimize the memory used would then be to use the DiskFileItemFactory with a small threshold (which, by the way, defaults to 10KB, which is affordable).

ServletFileUpload upload = new ServletFileUpload(new DiskFileItemFactory());

This way you've enough memory for the real processing.

If you still hit the heap memory limit, then the next step would indeed be to increase the heap.

Answered by OscarRyz

You're trying to use more RAM than what you have.

Try increasing the maximum memory by adding the -Xmx flag when starting your program.

java -Xmx128m YourProgram

That would assign 128 megabytes of memory, at maximum, to your program.
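To verify what the flag actually gave you, you can ask the running JVM for its maximum heap. This is a small sketch to complement the answer above; the MB conversion is only for readability:

```java
public class MaxHeap {
    public static void main(String[] args) {
        // maxMemory() reports the heap ceiling (effectively the -Xmx value)
        // that the JVM this code runs in was started with.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap (MB): " + (maxBytes / (1024 * 1024)));
    }
}
```

Running it with and without -Xmx128m makes it easy to confirm the flag is being applied.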

Answered by Yannick Loriot

The java.lang.OutOfMemoryError means that you have exceeded the memory allocated to the JVM. Use -Xmx to change the maximum heap size of your JVM. (Thank you, Software Monkey.)

You can use this to see the current JVM heap usage:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
System.out.println(memoryBean.getHeapMemoryUsage());