Node.js - Maximum call stack size exceeded

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/20936486/

Node.js - Maximum call stack size exceeded

node.js, recursion, stack-overflow, callstack

Asked by user1518183

When I run my code, Node.js throws a "RangeError: Maximum call stack size exceeded" exception caused by too many recursive calls. I tried to increase the Node.js stack size with sudo node --stack-size=16000 app, but Node.js crashes without any error message. When I run this again without sudo, Node.js prints 'Segmentation fault: 11'. Is there a possibility to solve this without removing my recursive calls?

Answer by heinob

You should wrap your recursive function call into a

  • setTimeout,
  • setImmediate, or
  • process.nextTick

function to give node.js the chance to clear the stack. If you don't do that and there are many loops without any real async function call, or if you do not wait for the callback, your RangeError: Maximum call stack size exceeded will be inevitable.

There are many articles concerning "Potential Async Loop". Here is one.

Now some more example code:

// ANTI-PATTERN
// THIS WILL CRASH

var condition = false, // potential means "maybe never"
    max = 1000000;

function potAsyncLoop( i, resume ) {
    if( i < max ) {
        if( condition ) { 
            someAsyncFunc( function( err, result ) { 
                potAsyncLoop( i+1, resume );
            });
        } else {
            // this will crash after some rounds with
            // "stack exceed", because control is never given back
            // to the browser 
            // -> no GC and browser "dead" ... "VERY BAD"
            potAsyncLoop( i+1, resume ); 
        }
    } else {
        resume();
    }
}
potAsyncLoop( 0, function() {
    // code after the loop
    ...
});

This is right:

var condition = false, // potential means "maybe never"
    max = 1000000;

function potAsyncLoop( i, resume ) {
    if( i < max ) {
        if( condition ) { 
            someAsyncFunc( function( err, result ) { 
                potAsyncLoop( i+1, resume );
            });
        } else {
            // Now the browser gets the chance to clear the stack
            // after every round by getting the control back.
            // Afterwards the loop continues
            setTimeout( function() {
                potAsyncLoop( i+1, resume ); 
            }, 0 );
        }
    } else {
        resume();
    }
}
potAsyncLoop( 0, function() {
    // code after the loop
    ...
});

Now your loop may become too slow, because we lose a little time (one browser roundtrip) per round. But you do not have to call setTimeout in every round. Normally it is OK to do it every 1000th time. But this may differ depending on your stack size:

var condition = false, // potential means "maybe never"
    max = 1000000;

function potAsyncLoop( i, resume ) {
    if( i < max ) {
        if( condition ) { 
            someAsyncFunc( function( err, result ) { 
                potAsyncLoop( i+1, resume );
            });
        } else {
            if( i % 1000 === 0 ) {
                setTimeout( function() {
                    potAsyncLoop( i+1, resume ); 
                }, 0 );
            } else {
                potAsyncLoop( i+1, resume ); 
            }
        }
    } else {
        resume();
    }
}
potAsyncLoop( 0, function() {
    // code after the loop
    ...
});
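
Since this question is about Node.js specifically, setImmediate is usually a cheaper way to yield back to the event loop than setTimeout(fn, 0), which is subject to a minimum timer delay of about 1 ms. Here is a minimal sketch (not from the original answer) of the same loop using setImmediate, reusing the condition, max and someAsyncFunc placeholders from the snippets above:

function potAsyncLoopImmediate( i, resume ) {
    if( i < max ) {
        if( condition ) {
            someAsyncFunc( function( err, result ) {
                potAsyncLoopImmediate( i+1, resume );
            });
        } else {
            // setImmediate queues the next round after pending I/O callbacks,
            // so the stack unwinds completely between iterations without the
            // minimum delay that setTimeout( fn, 0 ) adds
            setImmediate( function() {
                potAsyncLoopImmediate( i+1, resume );
            });
        }
    } else {
        resume();
    }
}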

Answer by user1518183

I found a dirty solution:

/bin/bash -c "ulimit -s 65500; exec /usr/local/bin/node --stack-size=65500 /path/to/app.js"

It just increases the call stack limit. I think this is not suitable for production code, but I needed it for a script that runs only once.

Answer by Angular University

In some languages this can be solved with tail call optimization, where the recursive call is transformed under the hood into a loop, so no maximum call stack size exceeded error can occur.

But in JavaScript the current engines don't support this; it is foreseen for the new version of the language, ECMAScript 6.

Node.js has some flags to enable ES6 features, but tail calls are not yet available.

So you can refactor your code to implement a technique called trampolining, or refactor in order to transform recursion into a loop.

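A minimal trampoline sketch (the names trampoline and sumBelow are illustrative, not from the original answer): instead of recursing directly, the function returns a thunk, and a plain loop keeps unwrapping thunks, so the stack never grows:

function trampoline( fn ) {
    return function( ...args ) {
        let result = fn( ...args );
        while( typeof result === 'function' ) {
            result = result(); // unwrap one deferred call per iteration
        }
        return result;
    };
}

// recursive style, but each "call" is deferred as a thunk
function sumBelow( n, acc ) {
    acc = acc || 0;
    return n === 0 ? acc : function() { return sumBelow( n - 1, acc + n ); };
}

const safeSum = trampoline( sumBelow );
console.log( safeSum( 1000000 ) ); // works, no RangeError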

Answer by Werlious

I had a similar issue to this. I had a problem using multiple Array.map() calls in a row (around 8 maps at once) and was getting a maximum call stack size exceeded error. I solved this by changing the maps into 'for' loops.

So if you are using a lot of map calls, changing them to for loops may fix the problem.

Edit

Just for clarity (and probably-not-needed-but-good-to-know info): using .map() causes the array to be prepped (resolving getters, etc.) and the callback to be cached, and it also internally keeps an index of the array (so the callback is provided with the correct index/value). This stacks up with each nested call, and caution is advised even when not nested, as the next .map() could be called before the first array is garbage collected (if at all).

Take this example:

var cb = function( arr ) { /* some callback function */ };
var arr1 = [ /* some large data set */ ],
    arr2 = [ /* some large data set */ ];

arr1.map(v => {
    // do something
})
cb(arr1)
arr2.map(v => {
    // do something // even though v is overwritten, and the first array
                    // has been passed through, it is still in memory
                    // because of the cached calls to the callback function
}) 

If we change this to:

for(const v of arr1) {
    // do something
}
cb(arr1)
for(const v of arr2) {
    // do something  // Here there is no callback function to
                     // store a reference for, and the array has
                     // already been passed on (gone out of scope),
                     // so the garbage collector has an opportunity
                     // to remove the array if it runs low on memory
}

I hope this makes some sense (I don't have the best way with words) and helps a few people avoid the head-scratching I went through.

If anyone is interested, here is also a performance test comparing map and for loops (not my work).

https://github.com/dg92/Performance-Analysis-JS

For loops are usually better than map, but not reduce, filter, or find

Answer by Jeff Lowery

I thought of another approach using function references that limits call stack size without using setTimeout() (Node.js v10.16.0):

testLoop.js

let counter = 0;
const max = 1000000000n;  // 'n' signifies BigInt
Error.stackTraceLimit = 100;

const A = () => {
  fp = B;
}

const B = () => {
  fp = A;
}

let fp = B;

const then = process.hrtime.bigint();

for(;;) {
  counter++;
  if (counter > max) {
    const now = process.hrtime.bigint();
    const nanos = now - then;

    console.log({ "runtime(sec)": Number(nanos) / (1000000000.0) })
    throw Error('exit')
  }
  fp()
  continue;
}

output:

$ node testLoop.js
{ 'runtime(sec)': 18.947094799 }
C:\Users\jlowe\Documents\Projects\clearStack\testLoop.js:25
    throw Error('exit')
    ^

Error: exit
    at Object.<anonymous> (C:\Users\jlowe\Documents\Projects\clearStack\testLoop.js:25:11)
    at Module._compile (internal/modules/cjs/loader.js:776:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:787:10)
    at Module.load (internal/modules/cjs/loader.js:653:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
    at Function.Module._load (internal/modules/cjs/loader.js:585:3)
    at Function.Module.runMain (internal/modules/cjs/loader.js:829:12)
    at startup (internal/bootstrap/node.js:283:19)
    at bootstrapNodeJSCore (internal/bootstrap/node.js:622:3)

Answer by cigol on

Pre:

For me, the program hitting the max call stack wasn't because of my code. It ended up being a different issue which caused congestion in the flow of the application. Because I was trying to add too many items to mongoDB without any configuration changes, the call stack issue kept popping up, and it took me a few days to figure out what was going on... that said:

Following up on what @Jeff Lowery answered: I enjoyed this answer so much, and it sped up what I was doing by at least 10x.

I'm new at programming, but I attempted to modularize the answer. Also, I didn't like the error being thrown, so I wrapped it in a do/while loop instead. If anything I did is incorrect, please feel free to correct me.

module.exports = function(object) {
    const { max = 1000000000n, fn } = object;
    let counter = 0;
    let running = true;
    Error.stackTraceLimit = 100;
    const A = (fn) => {
        fn();
        flipper = B;
    };
    const B = (fn) => {
        fn();
        flipper = A;
    };
    let flipper = B;
    const then = process.hrtime.bigint();
    do {
        counter++;
        if (counter > max) {
            const now = process.hrtime.bigint();
            const nanos = now - then;
            console.log({ 'runtime(sec)': Number(nanos) / 1000000000.0 });
            running = false;
        }
        flipper(fn);
        continue;
    } while (running);
};

Check out this gist to see my files and how to call the loop. https://gist.github.com/gngenius02/3c842e5f46d151f730b012037ecd596c

Answer by weakish

If you don't want to implement your own wrapper, you can use a queue system, e.g. async.queue or queue.

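For example, here is a minimal sketch using async.queue (assuming the async v3 API; processItem is a hypothetical per-item step). The queue schedules the work, so no deep recursion builds up:

const async = require('async');

// worker: handles one task, then signals completion via the callback
const q = async.queue(function (task, done) {
    processItem(task.item);   // hypothetical per-item processing
    setImmediate(done);       // yield back to the event loop between tasks
}, 1);

q.drain(function () {
    console.log('all items have been processed');
});

for (let i = 0; i < 1000000; i++) {
    q.push({ item: i });
}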

Answer by serkan

Regarding increasing the max stack size, on 32 bit and 64 bit machines V8's memory allocation defaults are, respectively, 700 MB and 1400 MB. In newer versions of V8, memory limits on 64 bit systems are no longer set by V8, theoretically indicating no limit. However, the OS (Operating System) on which Node is running can always limit the amount of memory V8 can take, so the true limit of any given process cannot be generally stated.

V8 does, however, make available the --max_old_space_size option, which allows control over the amount of memory available to a process, accepting a value in MB. Should you need to increase the memory allocation, simply pass this option the desired value when spawning a Node process.

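For example (the 4096 MB value and the app.js entry point are just illustrative placeholders):

node --max_old_space_size=4096 app.js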

It is often an excellent strategy to reduce the available memory allocation for a given Node instance, especially when running many instances. As with stack limits, consider whether massive memory needs are better delegated to a dedicated storage layer, such as an in-memory database or similar.

Answer by Abhay Shiro

Please check that the function you are importing and the one that you have declared in the same file do not have the same name.

I will give you an example of this error. In Express JS (using ES6), consider the following scenario:

import {getAllCall} from '../../services/calls';

let getAllCall = () => {
   return getAllCall().then(res => {
      //do something here
   })
}
module.exports = {
   getAllCall
}

The above scenario will cause the infamous RangeError: Maximum call stack size exceeded error, because the function keeps calling itself so many times that it runs out of call stack.

Most of the time the error is in the code (like the one above). The other way of resolving it is to manually increase the call stack. Well, this works for certain extreme cases, but it is not recommended.

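A minimal fix sketch, assuming the local function is only meant to wrap the imported one: rename the local wrapper so it no longer shadows the import (getAllCallHandler is an illustrative name, not from the original answer):

import {getAllCall} from '../../services/calls';

// renamed so it no longer shadows (and recursively calls) the import
let getAllCallHandler = () => {
   return getAllCall().then(res => {
      //do something here
   })
}
module.exports = {
   getAllCallHandler
}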

Hope my answer helped you.

Answer by Marcin Kamiński

You can use a for loop.

var items = [1, 2, 3];
for(var i = 0; i < items.length; i++) {
  if(i == items.length - 1) {
    res.ok(i);
  }
}