Node.js "FATAL ERROR: JS Allocation failed - process out of memory" -- possible to get a stack trace?

Notice: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must follow the same license and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/13616770/

Date: 2020-09-02 16:44:02 · Source: igfitidea


node.js

Asked by Zane Claes

Well... I'm back to square one. I can't figure this out for the life of me.

I'm getting the following error:

FATAL ERROR: JS Allocation failed - process out of memory

I could enumerate the dozens (yes, dozens) of things I've tried to get to the root of this problem, but really it would be far too much. So here are the key points:

  • I can only get it to happen on my production server, and my app is large and complicated, so it is proving very difficult to isolate
  • It happens even though heap size & RSS size are both < 200 Mb, which should not be a problem given that the machines (Amazon Cloud, CentOS, m1.large) have 8Gb RAM

My assumption is that (because of the 2nd point) a leak is probably not the cause; rather, it seems like there's probably a SINGLE object that is very large. The following thread backs up this theory: In Node.js using JSON.stringify results in 'process out of memory' error

What I really need is some way to find out what the state of the memory is at the moment the application crashes, or perhaps a stack trace leading up to the FATAL ERROR.

Based upon my assumption above, a 10-minute-old heap dump is insufficient (since the object would not yet have resided in memory).

Accepted answer by Zane Claes

I have to give huge props to Trevor Norris on this one for helping to modify node.js itself such that it would automatically generate a heap dump when this error happened.

Ultimately what solved this problem for me, though, was much more mundane. I wrote some simple code that appended the endpoint of each incoming API request to a log file. I waited to gather ~10 data points (crashes) and compared the endpoints which had been run 60sec before the crash. I found that in 9/10 cases, a single endpoint had been hit just before the crash.

From there, it was just a matter of digging deeper into the code. I pared everything down -- returning less data from my mongoDB queries, passing only necessary data from an object back to the callback, etc. Now we've gone 6x longer than average without a single crash on any of the servers, leading me to hopethat it is resolved... for now.

Answer by Jesse

Just because this is the top answer on Google at the moment, I figured I'd add a solution for a case I just ran across:

I had this issue using express with ejs templates - the issue was that I failed to close an ejs block, and the file was js code - something like this:

var url = '<%=getUrl("/some/url")'
/* lots more javascript that ejs tries to parse in memory apparently */

This is obviously a super specific case; OP's solution should be used the majority of the time. However, OP's solution would not work for this (the ejs stack trace won't be surfaced by ofe).

Answer by Xaviju

There is no single solution for this problem.
I read about different cases, most of them related to JS, but in my case, for example, it was just a broken jade template loop that was infinite because of a code bug.

I guess it is just a syntax error that node doesn't handle well.
Check your code or post it to find the problem.

Answer by iamthing

In my case I was deploying Rails 4.2.1 via cap production deploy (capistrano), and during the assets precompile I received:

rake stdout: rake aborted! ExecJS::RuntimeError: FATAL ERROR: Evacuation Allocation failed - process out of memory (execjs):1

I had run a dozen data imports via active_admin earlier, and it appears to have used up all the RAM.

Solution: a server restart, and the deploy ran first time.

Answer by Tracker1

Could it be a recursion issue on an object you are serializing, that is just large to begin with, and runs out of memory before recursion becomes an issue?

I created the safe-clone-deep npm module for this reason... basically you'll want to do the following.

var clone = require('safe-clone-deep');
...
   return JSON.stringify(clone(originalObject));

This will allow you to clone pretty much any object that will then serialize safely. Also, if one of the objects inherits from Error, it will serialize the inherited name, message and stack properties, since these don't typically serialize.

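For illustration, the cycle-breaking idea can be hand-rolled in a few lines (a sketch only, not the actual safe-clone-deep implementation, which also handles Errors and other edge cases):

```javascript
// Deep-clone a value while dropping circular references, so the result
// is always safe to pass to JSON.stringify. Note this sketch also drops
// repeated (non-circular) references to the same object.
function safeClone(value, seen = new WeakSet()) {
  if (value === null || typeof value !== 'object') return value;
  if (seen.has(value)) return undefined; // already visited: break the cycle
  seen.add(value);
  if (Array.isArray(value)) return value.map((v) => safeClone(v, seen));
  const out = {};
  for (const key of Object.keys(value)) {
    out[key] = safeClone(value[key], seen);
  }
  return out;
}
```

A self-referencing object (`a.self = a`) makes plain `JSON.stringify(a)` throw, whereas `JSON.stringify(safeClone(a))` simply omits the cycle.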
Answer by grahamrhay

In our case, we had accidentally allocated a huge (sparse) array that caused util.format to blow up:

http://grahamrhay.wordpress.com/2014/02/24/fatal-error-js-allocation-failed-process-out-of-memory/

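The failure mode is easy to reproduce in miniature (a made-up example, not the blog post's code):

```javascript
// A single assignment far past the end creates a sparse array:
const arr = [];
arr[50000000] = 'oops';

// Only one element actually exists in the backing store...
console.log(Object.keys(arr).length); // 1

// ...but length claims fifty million and one slots, so anything that
// materializes the array index-by-index (Array.from(arr), a for loop
// up to arr.length, or util.format/util.inspect on the Node versions
// of that era) can allocate wildly.
console.log(arr.length); // 50000001
```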
Answer by Sourab Reddy

Analysing a number of cases, the most common problem is that of an infinite loop. This would be difficult to solve in a complex app; that is where test-driven development comes in handy!

Answer by Dave

In my case I had initialised an associative array (Object) using []. As soon as I initialised it as {} the problem went away.

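A minimal before/after of that change (the keys here are hypothetical):

```javascript
// Using an Array as an associative map: a large numeric id silently
// grows the array's length to id + 1, making it sparse and huge.
const byIdArray = [];
byIdArray[987654321] = { name: 'widget' };
console.log(byIdArray.length); // 987654322

// Using a plain Object: the same id is just one string key, and there
// is no length to balloon.
const byIdObject = {};
byIdObject[987654321] = { name: 'widget' };
console.log(Object.keys(byIdObject).length); // 1
```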
Answer by kendlete

In my case, a file I was using to seed the db during development was causing the leak. For some reason node didn't like a multi-line comment I had at the end of the file. I can't see anything wrong with it, but a process of elimination means I know it's this section of this file.

Answer by Dmitry Grinko

I've faced the same issue while installing the node packages on a server using npm i.

FATAL ERROR: Committing semi space failed. Allocation failed - process out of memory
 1: node::Abort() [npm]
 2: 0x556f73a6e011 [npm]
 3: v8::Utils::ReportOOMFailure(char const*, bool) [npm]
 4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [npm]
 5: v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [npm]
 6: v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [npm]
 7: v8::internal::Heap::CollectAllGarbage(int, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [npm]
 8: v8::internal::StackGuard::HandleInterrupts() [npm]
 9: v8::internal::Runtime_StackGuard(int, v8::internal::Object**, v8::internal::Isolate*) [npm]
10: 0x159539b040bd
Aborted

My solution was to add an additional flag:

node --max-old-space-size=250 `which npm` i

Hope it saves time for someone
