java OpenJDK Client VM - Cannot allocate memory

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/26382989/

Date: 2020-11-02 09:49:57  Source: igfitidea

OpenJDK Client VM - Cannot allocate memory

Tags: java, hadoop, memory, mapreduce, jvm

Asked by Sushmita Bhattacharya

I am running a Hadoop map reduce job on a cluster, and I am getting this error:

OpenJDK Client VM warning: INFO: os::commit_memory(0x79f20000, 104861696, 0) failed; error='Cannot allocate memory' (errno=12)

There is insufficient memory for the Java Runtime Environment to continue.

Native memory allocation (malloc) failed to allocate 104861696 bytes for committing reserved memory.

What should I do?

Answered by Shihao Xu

Make sure you have swap space on your machine:

ubuntu@VM-ubuntu:~$ free -m
             total       used       free     shared    buffers     cached
Mem:           994        928         65          0          1         48
-/+ buffers/cache:        878        115
Swap:         4095       1086       3009

Notice the Swap line.

I just encountered this problem on an Elastic Computing instance; it turned out that swap space is not mounted by default.
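On Linux you can check for swap and, if none is mounted, create a swap file. A sketch (the path and the 2G size are illustrative, and the creation steps require root):

```shell
# Check whether any swap is configured; "SwapTotal: 0 kB" means none.
grep SwapTotal /proc/meminfo

# If it reads 0, one common fix is a swap file:
#   sudo fallocate -l 2G /swapfile
#   sudo chmod 600 /swapfile
#   sudo mkswap /swapfile
#   sudo swapon /swapfile
# To keep it across reboots, append this line to /etc/fstab:
#   /swapfile none swap sw 0 0
```

After enabling it, `free -m` should show the new space on the Swap line.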

Answered by Frunk

You can try to increase the memory allocation size by passing these runtime parameters.

For example:

java -Xms1024M -Xmx2048M -jar application.jar
  • -Xmx sets the maximum heap size
  • -Xms sets the initial (minimum) heap size
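For a Hadoop MapReduce job, the heap flags are usually applied to the task JVMs rather than to the client command line. A sketch, assuming Hadoop 2.x property names (the -Xmx values are illustrative):

```xml
<!-- mapred-site.xml: heap options for each map/reduce task JVM -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx2048m</value>
</property>
<!-- Hadoop 1.x uses the single property mapred.child.java.opts instead. -->
```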

Answered by Prometheus

There can be a container memory overflow with the parameters you are using for the JVM.

Check whether the attributes:

yarn.nodemanager.resource.memory-mb
yarn.scheduler.minimum-allocation-mb
yarn.scheduler.maximum-allocation-mb

in yarn-site.xml match the desired values.
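A sketch of how those properties look in yarn-site.xml (the values are illustrative, for a NodeManager with roughly 4 GB available; size them to your own nodes):

```xml
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>  <!-- total memory YARN may allocate on this node -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>512</value>   <!-- smallest container the scheduler will grant -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>  <!-- largest container the scheduler will grant -->
</property>
```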

For more on memory configuration, see:

HortonWorks memory reference

Similar problem

Note: this is for Hadoop 2.0; if you are running Hadoop 1.0, check the Task attributes instead.