Java: Kubernetes (minikube) pod OOMKilled with apparently plenty of memory left in the node

Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/45270070/

Kubernetes (minikube) pod OOMKilled with apparently plenty of memory left in node

Tags: java, memory, kubernetes, minikube

Asked by DMB3

I'm using minikube, starting it with

minikube start --memory 8192

That gives the node 8 GB of RAM. I'm allocating pods with these resource constraints:

    resources:
      limits:
        memory: 256Mi
      requests:
        memory: 256Mi

So that's 256 MB of RAM for each pod, which I assume would give me 32 pods before the 8 GB memory limit is reached. The problem is that as soon as the 8th pod is deployed, the 9th will never run, because it's constantly OOMKilled.

For context, each pod is a Java application in a frolvlad/alpine-oraclejdk8:slim Docker container, run with -Xmx512m -Xms128m (even if the JVM really were using the full 512 MB instead of 256 MB, I'd still be well short of the 16 pods it would take to hit the 8 GB cap).
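
For anyone hitting the same symptom, the kill reason and the limit actually enforced on the container can be checked with kubectl (the pod name below is a placeholder):

    # Expect "Last State: Terminated, Reason: OOMKilled" on the failing container
    kubectl describe pod <pod-name>

    # Show the requests/limits the kubelet is enforcing
    kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].resources}'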

What am I missing here? Why are pods being OOMKilled with apparently so much free allocatable memory left?

Thanks in advance

Answered by Radek 'Goblin' Pieczonka

You must understand the way requests and limits work.

Requests specify the amount of allocatable resources that must be available on a node for a pod to be scheduled there. Requests will not cause OOMs; a request that can't be satisfied just means the pod won't be scheduled.
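
In practice an unsatisfiable request shows up as a pod stuck in Pending, with a FailedScheduling event reading something like "0/1 nodes are available: 1 Insufficient memory" (exact wording varies by version):

    kubectl describe pod <pod-name>    # look for FailedScheduling in the Events section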

Limits, on the other hand, are hard caps for a given pod: its memory usage is capped at that level. So even if you have 16 GB of RAM free on the node, a pod with a 256 MiB limit will be OOM killed the moment it reaches that level.
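
That is exactly the situation in the question: -Xmx512m allows the heap alone to grow to 512 MB, twice the 256 MiB container limit, so the kill is expected regardless of free node memory. A sketch of a limit with headroom for heap plus JVM overhead might look like this (768Mi is an assumed starting point, not a measured value):

    resources:
      limits:
        memory: 768Mi    # must exceed -Xmx (512m) plus off-heap/metaspace overhead
      requests:
        memory: 256Mi    # what the scheduler reserves on the node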

If you want, you can define only requests. Then your pods will be able to grow to full node capacity without being capped.
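
A minimal sketch of that requests-only shape, reusing the value from the question:

    resources:
      requests:
        memory: 256Mi    # scheduling reservation only; no hard cap, so no per-pod kill at 256 MiB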

https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
