在 Docker 容器中运行的 JVM 的驻留集大小 (RSS) 和 Java 总提交内存 (NMT) 之间的区别
声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/38597965/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share them, but you must attribute them to the original authors (not me):
StackOverFlow
Difference between Resident Set Size (RSS) and Java total committed memory (NMT) for a JVM running in Docker container
提问 by sunsin1985
Scenario:
场景:
I have a JVM running in a docker container. I did some memory analysis using two tools: 1) top 2) Java Native Memory Tracking. The numbers look confusing and I am trying to find what's causing the differences.
我有一个在 docker 容器中运行的 JVM。我使用两个工具进行了一些内存分析:1) top 2) Java Native Memory Tracking。数字看起来令人困惑,我试图找出导致差异的原因。
Question:
问题:
The RSS is reported as 1272 MB for the Java process and the total Java committed memory (NMT) is reported as 790.55 MB. How can I explain where the rest of the memory (1272 - 790.55 = 481.45 MB) went?
Java 进程的 RSS 报告为 1272 MB,而 Java 总提交内存 (NMT) 报告为 790.55 MB。我该如何解释其余的 1272 - 790.55 = 481.45 MB 内存去了哪里?
Why I want to keep this issue open even after looking at this question on SO:
为什么即使在 SO 上查看了这个问题后,我仍想保持这个问题的开放性:
I did see the answer and the explanation makes sense. However, after getting output from Java NMT and pmap -x, I am still not able to concretely map which Java memory addresses are actually resident and physically mapped. I need some concrete explanation (with detailed steps) to find what's causing this difference between RSS and Java total committed memory.
我确实看到了答案,而且解释很有意义。但是,在拿到 Java NMT 和 pmap -x 的输出后,我仍然无法具体对应出哪些 Java 内存地址是真正常驻并被物理映射的。我需要一些具体的解释(带有详细的步骤)来找出导致 RSS 和 Java 总提交内存之间差异的原因。
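For reference, a minimal set of commands to put the three views side by side (only a sketch: it assumes the JVM is PID 1 inside a container named my-app, as in the answer below, and that ps and pmap are installed in the image):
作为参考,下面是把这三种视图并排收集起来的一组最简命令(仅为示意:假设 JVM 在名为 my-app 的容器里 PID 为 1,与下文回答一致,且镜像中装有 ps 和 pmap):
docker exec my-app jcmd 1 VM.native_memory summary   # NMT: committed/reserved per category
docker exec my-app ps -o pid,rss,vsz -p 1            # RSS/VSZ as the kernel reports them
docker exec my-app pmap -x 1                         # per-mapping RSS (3rd column) of the same process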
Top Output
top 输出
Java NMT
Java NMT(本机内存跟踪)
Docker memory stats
Docker 内存统计
Graphs
图表
I have a docker container that has been running for more than 48 hours. Now, when I look at a graph which contains:
我有一个运行超过 48 小时的 docker 容器。现在,当我看到一个包含以下内容的图表时:
- Total memory given to the docker container = 2 GB
- Java Max Heap = 1 GB
- Total committed (JVM) = always less than 800 MB
- Heap Used (JVM) = always less than 200 MB
- Non Heap Used (JVM) = always less than 100 MB.
- RSS = around 1.1 GB.
- 分配给 docker 容器的总内存 = 2 GB
- Java 最大堆 = 1 GB
- 总提交 (JVM) = 始终小于 800 MB
- 已用堆 (JVM) = 始终小于 200 MB
- 已用非堆 (JVM) = 始终小于 100 MB。
- RSS = 大约 1.1 GB。
So, what's eating the memory between 1.1 GB (RSS) and 800 MB (Java total committed memory)?
那么,到底是什么吃掉了 1.1 GB (RSS) 与 800 MB(Java 总提交内存)之间的这部分内存?
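One hedged way to see what is eating that gap is to read the kernel's own per-mapping accounting (assuming awk and grep are available in the image and the JVM is PID 1): the large anonymous mappings that sit outside the heap usually belong to thread stacks, metaspace, the code cache, direct buffers or the native allocator.
一个(仅供参考的)排查思路是直接读取内核自己的按映射统计(假设镜像里有 awk 和 grep,且 JVM 的 PID 为 1):堆之外那些较大的匿名映射通常属于线程栈、元空间、代码缓存、直接缓冲区或本地内存分配器。
docker exec my-app awk '/^Rss:/ {sum += $2} END {print sum " kB resident in total"}' /proc/1/smaps
docker exec my-app grep VmRSS /proc/1/status          # should roughly match the sum above and top's RES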
回答 by VonC
You have some clues in "Analyzing java memory usage in a Docker container" from Mikhail Krestjaninoff:
在 Mikhail Krestjaninoff 的"Analyzing java memory usage in a Docker container"一文中可以找到一些线索:
(And to be clear, in May 2019, three years later, the situation does improve with openJDK 8u212.)
(需要说明的是,三年后的 2019 年 5 月,随着 openJDK 8u212 的发布,这一情况确实有所改善。)
Resident Set Size is the amount of physical memory currently allocated and used by a process (without swapped out pages). It includes the code, data and shared libraries (which are counted in every process which uses them)
Why does docker stats info differ from the ps data?
Answer for the first question is very simple - Docker has a bug (or a feature - depends on your mood): it includes file caches into the total memory usage info. So, we can just avoid this metric and use
ps
info about RSS. Well, ok - but why is RSS higher than Xmx?
Theoretically, in case of a java application
驻留集大小(Resident Set Size)是一个进程当前已分配并正在使用的物理内存量(不包括被换出的页)。它包括代码、数据和共享库(共享库会被计入每个使用它们的进程)
为什么 docker stats 信息与 ps 数据不同?
第一个问题的答案非常简单 - Docker 有一个 bug(或者说一个特性 - 取决于你的心情):它把文件缓存也算进了总内存使用信息。因此,我们可以不用这个指标,改用
ps
给出的 RSS 信息。好吧,可是为什么 RSS 会比 Xmx 还高呢?
理论上,对于一个 java 应用程序来说
RSS = Heap size + MetaSpace + OffHeap size
where OffHeap consists of thread stacks, direct buffers, mapped files (libraries and jars) and JVM code itself.
Since JDK 1.8.40 we have Native Memory Tracker!
As you can see, I've already added
-XX:NativeMemoryTracking=summary
property to the JVM, so we can just invoke it from the command line:
其中 OffHeap 由线程堆栈、直接缓冲区、映射文件(库和 jar)以及 JVM 代码本身组成
从 JDK 1.8.40 开始,我们有了 Native Memory Tracker!
如您所见,我已经把
-XX:NativeMemoryTracking=summary
这个参数加到了 JVM 上,因此我们可以直接从命令行调用它:
docker exec my-app jcmd 1 VM.native_memory summary
(This is what the OP did)
(这就是 OP 所做的)
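Not shown in the original answer, but the same NMT interface can also diff two points in time, which helps spot the category that keeps growing:
原回答中没有展示,但同一个 NMT 接口还可以对两个时间点做差,便于找出持续增长的类别:
docker exec my-app jcmd 1 VM.native_memory baseline       # record a baseline
# ... let the application run for a while ...
docker exec my-app jcmd 1 VM.native_memory summary.diff   # growth per NMT category since the baseline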
Don't worry about the “Unknown” section - seems that NMT is an immature tool and can't deal with CMS GC (this section disappears when you use another GC).
Keep in mind that NMT displays “committed” memory, not "resident" (which you get through the ps command). In other words, a memory page can be committed without being considered resident (until it is directly accessed).
That means that NMT results for non-heap areas (heap is always preinitialized) might be bigger than RSS values.
不要担心“未知”部分 - 似乎 NMT 是一个不成熟的工具,无法处理 CMS GC(当您使用另一个 GC 时,此部分会消失)。
请记住,NMT 显示的是"已提交"的内存,而不是"常驻"内存(后者可以通过 ps 命令得到)。换句话说,一个内存页可以已被提交,但在被直接访问之前并不算常驻内存。
这意味着非堆区域(堆始终预初始化)的 NMT 结果可能大于 RSS 值。
(that is where "Why does a JVM report more committed memory than the linux process resident set size?" comes in)
(这正是"为什么 JVM 报告的已提交内存比 linux 进程驻留集大小还多?"这个问题所要回答的)
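A quick way to see the committed-vs-resident distinction in practice (a sketch; app.jar stands for any application you run):
想直观感受"已提交"与"常驻"的区别,可以参考下面的示意(app.jar 泛指任意应用):
java -Xms1g -Xmx1g -XX:NativeMemoryTracking=summary -jar app.jar
# NMT reports ~1 GB of committed heap, but RSS stays low until the pages are actually touched
java -Xms1g -Xmx1g -XX:+AlwaysPreTouch -XX:NativeMemoryTracking=summary -jar app.jar
# -XX:+AlwaysPreTouch touches every committed heap page at startup, so RSS jumps to ~1 GB immediately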
As a result, despite the fact that we set the jvm heap limit to 256m, our application consumes 367M. The “other” 164M are mostly used for storing class metadata, compiled code, threads and GC data.
First three points are often constants for an application, so the only thing which increases with the heap size is GC data.
This dependency is linear, but the “k” coefficient (y = kx + b) is much less than 1.
结果,尽管我们将 jvm 堆限制设置为 256m,但我们的应用程序消耗了 367M。“其他”164M 主要用于存储类元数据、编译代码、线程和 GC 数据。
前三点通常是应用程序的常量,因此唯一随堆大小增加的就是 GC 数据。
这种依赖关系是线性的,但系数"k"(y = kx + b)远小于 1。
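Of the "other" consumers listed above, thread stacks are the easiest to estimate yourself (a sketch, again assuming the my-app container with the JVM as PID 1): multiply the native thread count by the stack size.
在上面列出的"其他"消耗项中,线程栈是最容易自行估算的(仍以 my-app 容器、JVM 为 PID 1 作为假设):用本地线程数乘以栈大小即可。
docker exec my-app sh -c 'ls /proc/1/task | wc -l'                            # number of native threads
docker exec my-app java -XX:+PrintFlagsFinal -version | grep ThreadStackSize  # default stack size (in KB)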
More generally, this seems to be followed by issue 15020, which reports a similar issue since docker 1.7:
更一般地说,这似乎与 issue 15020 有关,该 issue 报告了自 docker 1.7 以来的类似问题:
I'm running a simple Scala (JVM) application which loads a lot of data into and out of memory.
I set the JVM to 8G heap (-Xmx8G). I have a machine with 132G memory, and it can't handle more than 7-8 containers because they grow well past the 8G limit I imposed on the JVM.
我正在运行一个简单的 Scala (JVM) 应用程序,它会把大量数据加载进内存,又不断把数据换出。
我把 JVM 堆设置为 8G (-Xmx8G)。我有一台 132G 内存的机器,它却带不动 7-8 个以上的容器,因为它们的内存占用远远超出了我给 JVM 设定的 8G 限制。
(docker stat was reported as misleading before, as it apparently includes file caches into the total memory usage info)
(之前 docker stat 被认为具有误导性,因为它显然把文件缓存也算进了总内存使用信息)
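To check how much of docker's number is file cache rather than process RSS, you can read the container's cgroup accounting directly (a sketch; the path below is for cgroup v1, on cgroup v2 the file is /sys/fs/cgroup/memory.stat with anon/file fields):
要确认 docker 的数字里有多少是文件缓存而不是进程 RSS,可以直接读取容器的 cgroup 统计(示意;下面的路径对应 cgroup v1,在 cgroup v2 下文件是 /sys/fs/cgroup/memory.stat,字段为 anon/file):
docker exec my-app cat /sys/fs/cgroup/memory/memory.stat | grep -E '^(cache|rss) '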
docker stat shows that each container itself is using much more memory than the JVM is supposed to be using. For instance:
docker stat 显示每个容器本身使用的内存比 JVM 应该使用的内存多得多。例如:
CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O
dave-1 3.55% 10.61 GB/135.3 GB 7.85% 7.132 MB/959.9 MB
perf-1 3.63% 16.51 GB/135.3 GB 12.21% 30.71 MB/5.115 GB
It almost seems that the JVM is asking the OS for memory, which is allocated within the container, and the JVM is freeing memory as its GC runs, but the container doesn't release the memory back to the main OS. So... memory leak.
看起来几乎就像是 JVM 在向操作系统请求内存,这些内存在容器内被分配,JVM 在 GC 运行时又释放了内存,但容器并没有把内存归还给宿主操作系统。所以……内存泄漏。
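None of this is from the answer above, but if the gap keeps growing, a common mitigation (sketched here; app.jar is a placeholder) is to put explicit caps on the non-heap pools and on glibc's malloc arenas, and then size the container limit as heap plus those caps plus some headroom:
下面的内容不属于上面的回答,但如果这个差距持续增长,一个常见的缓解办法(仅为示意;app.jar 为占位符)是给各个非堆内存池以及 glibc 的 malloc arena 设置明确上限,然后把容器限额定为堆加上这些上限再留一些余量:
MALLOC_ARENA_MAX=2 java \
    -Xms1g -Xmx1g \
    -Xss512k \
    -XX:MaxMetaspaceSize=256m \
    -XX:ReservedCodeCacheSize=128m \
    -XX:MaxDirectMemorySize=256m \
    -XX:NativeMemoryTracking=summary \
    -jar app.jar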