Disclaimer: this page is a Chinese/English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute it to the original authors (not me). Original StackOverflow post: http://stackoverflow.com/questions/29001702/

Date: 2020-08-11 07:10:55  Source: igfitidea

Why YARN java heap space memory error?

Tags: java, hadoop, mapreduce, heap, yarn

Asked by Kenny Basuki

I want to experiment with memory settings in YARN, so I am trying to configure some parameters in yarn-site.xml and mapred-site.xml. By the way, I use Hadoop 2.6.0. But I get an error when I run a MapReduce job. It says:

15/03/12 10:57:23 INFO mapreduce.Job: Task Id :
attempt_1426132548565_0001_m_000002_0, Status : FAILED
Error: Java heap space
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

I think I have configured it correctly: I gave map.java.opts and reduce.java.opts a small size of 64 MB. I have since tried adjusting some other parameters, such as changing map.java.opts and reduce.java.opts in mapred-site.xml, but I still get this error. I think I do not really understand how YARN memory works. By the way, I am trying this on a single-node machine.

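For reference, a 64 MB task heap as described above would look roughly like the fragment below in mapred-site.xml. The exact property bodies are an assumption based on the description; such a small heap is easily exhausted by a non-trivial task, which matches the "Error: Java heap space" message above.

```xml
<!-- mapred-site.xml: a 64 MB heap for each task JVM (illustrative) -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx64m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx64m</value>
</property>
```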

Answered by Gaurav Mishra

YARN handles resource management and serves both batch workloads (such as MapReduce) and real-time workloads.

There are memory settings that can be set at the Yarn container level and also at the mapper and reducer level. Memory is requested in increments of the Yarn container size. Mapper and reducer tasks run inside a container.

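The "requested in increments" rule above can be sketched as a small function. This is a minimal illustration, not YARN's actual scheduler code; the function name and the 1024 MB default are assumptions for the example.

```python
import math

def yarn_allocation(requested_mb, minimum_allocation_mb=1024):
    """Round a memory request up to the next multiple of the
    scheduler's minimum allocation, as described above (sketch)."""
    return max(minimum_allocation_mb,
               math.ceil(requested_mb / minimum_allocation_mb)
               * minimum_allocation_mb)

# A 1500 MB request on a cluster with a 1024 MB minimum allocation
# is served by a 2048 MB container; a 512 MB request gets 1024 MB.
print(yarn_allocation(1500))  # 2048
print(yarn_allocation(512))   # 1024
```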

mapreduce.map.memory.mb and mapreduce.reduce.memory.mb

The parameters above describe the upper memory limit for a map/reduce task; if the memory consumed by the task exceeds this limit, the corresponding container will be killed.

These parameters determine the maximum amount of memory that can be assigned to mapper and reducer tasks respectively. Let us look at an example: a mapper is bound by the memory upper limit defined in the configuration parameter mapreduce.map.memory.mb.

However, if the value of yarn.scheduler.minimum-allocation-mb is greater than the value of mapreduce.map.memory.mb, then yarn.scheduler.minimum-allocation-mb is respected and containers of that size are handed out.

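As a sketch of that interaction, with the (assumed, illustrative) values below a map task asking for 512 MB would still receive a 1024 MB container, because the scheduler minimum wins:

```xml
<!-- yarn-site.xml (illustrative value) -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>

<!-- mapred-site.xml (illustrative value) -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>512</value>
</property>
```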

This parameter needs to be set carefully and if not set properly, this could lead to bad performance or OutOfMemory errors.

mapreduce.reduce.java.opts and mapreduce.map.java.opts

This property value needs to be less than the upper bound for the map/reduce task as defined in mapreduce.map.memory.mb/mapreduce.reduce.memory.mb, since the JVM heap must fit within the memory allocated to the map/reduce task's container.

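A common rule of thumb (an assumption here, not stated in the answer) is to set the JVM heap to roughly 75-80% of the container size, leaving headroom for non-heap JVM memory:

```xml
<!-- mapred-site.xml: heap sized below the container limit (illustrative) -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <!-- ~80% of the 1024 MB container -->
  <value>-Xmx819m</value>
</property>
```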

Answered by Nagendra

What @Gaurav said is correct. I had a similar issue and tried something like the following: include the properties below in yarn-site.xml and restart the VM.

<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
  <description>Whether virtual memory limits will be enforced for containers</description>
</property>

<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
  <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property>