Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/5172244/


Idiomatic way to wait for multiple callbacks in Node.js

node.js

Asked by Thiago Arrais

Suppose you need to do some operations that depend on some temp file. Since we're talking about Node here, those operations are obviously asynchronous. What is the idiomatic way to wait for all operations to finish in order to know when the temp file can be deleted?

Here is some code showing what I want to do:

do_something(tmp_file_name, function(err) {});
do_something_other(tmp_file_name, function(err) {});
fs.unlink(tmp_file_name);

But if I write it this way, the third call can be executed before the first two get a chance to use the file. I need some way to guarantee that the first two calls already finished (invoked their callbacks) before moving on without nesting the calls (and making them synchronous in practice).

I thought about using event emitters on the callbacks and registering a counter as receiver. The counter would receive the finished events and count how many operations were still pending. When the last one finished, it would delete the file. But there is the risk of a race condition and I'm not sure this is usually how this stuff is done.

How do Node people solve this kind of problem?

Accepted answer by Alfred

Update:

Now I would advise having a look at:

  • Promises

    The Promise object is used for deferred and asynchronous computations. A Promise represents an operation that hasn't completed yet, but is expected in the future.

    A popular promise library is bluebird. I would also advise having a look at why promises.

    You should use promises to turn this:

    fs.readFile("file.json", function (err, val) {
        if (err) {
            console.error("unable to read file");
        }
        else {
            try {
                val = JSON.parse(val);
                console.log(val.success);
            }
            catch (e) {
                console.error("invalid json in file");
            }
        }
    });
    

    Into this:

    fs.readFileAsync("file.json").then(JSON.parse).then(function (val) {
        console.log(val.success);
    })
    .catch(SyntaxError, function (e) {
        console.error("invalid json in file");
    })
    .catch(function (e) {
        console.error("unable to read file");
    });
    
  • Generators: for example, via co.

    Generator based control flow goodness for nodejs and the browser, using promises, letting you write non-blocking code in a nice-ish way.

    var co = require('co');
    
    co(function *(){
      // yield any promise
      var result = yield Promise.resolve(true);
    }).catch(onerror);
    
    co(function *(){
      // resolve multiple promises in parallel
      var a = Promise.resolve(1);
      var b = Promise.resolve(2);
      var c = Promise.resolve(3);
      var res = yield [a, b, c];
      console.log(res);
      // => [1, 2, 3]
    }).catch(onerror);
    
    // errors can be try/catched
    co(function *(){
      try {
        yield Promise.reject(new Error('boom'));
      } catch (err) {
        console.error(err.message); // "boom"
      }
    }).catch(onerror);
    
    function onerror(err) {
      // log any uncaught errors
      // co will not throw any errors you do not handle!!!
      // HANDLE ALL YOUR ERRORS!!!
      console.error(err.stack);
    }
    

If I understand correctly, I think you should have a look at the very good async library. You should especially have a look at its series function. Here is a copy of the snippets from its GitHub page:

async.series([
    function(callback){
        // do some stuff ...
        callback(null, 'one');
    },
    function(callback){
        // do some more stuff ...
        callback(null, 'two');
    },
],
// optional callback
function(err, results){
    // results is now equal to ['one', 'two']
});


// an example using an object instead of an array
async.series({
    one: function(callback){
        setTimeout(function(){
            callback(null, 1);
        }, 200);
    },
    two: function(callback){
        setTimeout(function(){
            callback(null, 2);
        }, 100);
    },
},
function(err, results) {
    // results is now equals to: {one: 1, two: 2}
});

As a plus this library can also run in the browser.
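Since the question's two operations are independent, the library's parallel function (same task signature as series) is arguably a better fit. To keep the example dependency-free, here is a minimal home-grown sketch of what async.parallel does under the hood: run all tasks concurrently, collect results by index, and call back exactly once:

```javascript
// minimal sketch of the async.parallel pattern: run tasks concurrently,
// collect results by original index, call done once when all finish
function parallel(tasks, done) {
  const results = new Array(tasks.length);
  let pending = tasks.length;
  let failed = false;
  tasks.forEach((task, i) => {
    task((err, res) => {
      if (failed) return;            // already reported an error
      if (err) { failed = true; return done(err); }
      results[i] = res;
      if (--pending === 0) done(null, results);
    });
  });
}

// usage: results keep task order even though 'two' finishes first
parallel([
  (cb) => setTimeout(() => cb(null, 'one'), 20),
  (cb) => setTimeout(() => cb(null, 'two'), 10),
], (err, results) => {
  console.log(results); // => ['one', 'two']
});
```

Note that results are stored by index, so the output order matches the task order, not the completion order.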

Answered by Michael Dillon

The simplest way is to increment an integer counter when you start an async operation and then decrement the counter in the callback. Depending on the complexity, the callback could check the counter for zero and then delete the file.

A slightly more complex approach would be to maintain a list of objects, where each object has whatever attributes you need to identify the operation (it could even be the function call) as well as a status code. The callbacks would set the status code to completed.

Then you would have a loop that waits (using process.nextTick) and checks whether all tasks are completed. The advantage of this method over the counter is that if all outstanding tasks could complete before all tasks have been issued, the counter technique would cause you to delete the file prematurely.

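A sketch of that task-list approach. One caveat: process.nextTick made sense on the early Node this answer targeted, but on modern Node a recursive process.nextTick loop starves timers and I/O, so this sketch uses setImmediate instead; the tasks themselves are hypothetical stand-ins:

```javascript
// each task records its status; a polling loop waits until all report 'completed'
const tasks = [
  { name: 'do_something', status: 'pending' },
  { name: 'do_something_other', status: 'pending' },
];

// hypothetical async work that marks its task done after a short delay
tasks.forEach((task, i) => {
  setTimeout(() => { task.status = 'completed'; }, 5 * (i + 1));
});

function waitForAll(done) {
  if (tasks.every((t) => t.status === 'completed')) return done();
  // setImmediate yields to the timer/I-O phases between checks
  setImmediate(() => waitForAll(done));
}

waitForAll(() => console.log('all tasks completed'));
```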
Answered by goofballLogic

// simple countdown latch
function CDL(countdown, completion) {
    this.signal = function() { 
        if(--countdown < 1) completion(); 
    };
}

// usage
var latch = new CDL(10, function() {
    console.log("latch.signal() was called 10 times.");
});
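Wired to the question's scenario, the latch could be used as below; the CDL constructor is repeated so the snippet is self-contained, and the setTimeout calls are hypothetical stand-ins for the two async operations:

```javascript
// countdown latch from the answer above
function CDL(countdown, completion) {
    this.signal = function() {
        if(--countdown < 1) completion();
    };
}

// two hypothetical async operations share one latch;
// the completion callback is where fs.unlink would go
const latch = new CDL(2, () => console.log('both operations finished'));

setTimeout(() => latch.signal(), 10); // stands in for do_something
setTimeout(() => latch.signal(), 5);  // stands in for do_something_other
```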

Answered by Ricardo Tomasi

There is no "native" solution, but there are a million flow-control libraries for Node. You might like Step:

Step(
  function(){
      do_something(tmp_file_name, this.parallel());
      do_something_else(tmp_file_name, this.parallel());
  },
  function(err) {
    if (err) throw err;
    fs.unlink(tmp_file_name);
  }
)

Or, as Michael suggested, counters could be a simpler solution. Take a look at this semaphore mock-up. You'd use it like this:

do_something1(file, queue('myqueue'));
do_something2(file, queue('myqueue'));

queue.done('myqueue', function(){
  fs.unlink(file);
});
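The semaphore mock-up link may rot, so here is one plausible sketch of the idea the usage above implies: `queue(name)` hands out a counted callback, and `queue.done(name, fn)` fires `fn` once every handed-out callback has been called. This is an assumption about the mock-up's semantics, not its actual code:

```javascript
// plausible sketch of the named-queue semaphore idea
function makeQueue() {
  const state = {};
  function queue(name) {
    const s = (state[name] = state[name] || { pending: 0, onDone: null });
    s.pending += 1;
    // the returned function is passed as the async operation's callback
    return function () {
      if (--s.pending === 0 && s.onDone) s.onDone();
    };
  }
  queue.done = function (name, fn) {
    const s = (state[name] = state[name] || { pending: 0, onDone: null });
    s.onDone = fn;
    if (s.pending === 0) fn(); // nothing outstanding: fire immediately
  };
  return queue;
}

// usage mirrors the answer's snippet
const queue = makeQueue();
setTimeout(queue('myqueue'), 10);
setTimeout(queue('myqueue'), 5);
queue.done('myqueue', () => console.log('myqueue drained'));
```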

Answered by Rob Raisch

I'd like to offer another solution that utilizes the speed and efficiency of the programming paradigm at the very core of Node: events.

Everything you can do with Promises or modules designed to manage flow-control, like async, can be accomplished using events and a simple state-machine, which I believe offers a methodology that is, perhaps, easier to understand than other options.

For example, assume you wish to sum the lengths of multiple files in parallel:

const fs = require('fs');
const EventEmitter = require('events').EventEmitter;

// simple event-driven state machine
const sm = new EventEmitter();

// running state
let context={
  tasks:    0,    // number of total tasks
  active:   0,    // number of active tasks
  results:  []    // task results
};

const next = (result) => { // must be called when each task chain completes

  if(result) { // preserve result of task chain
    context.results.push(result);
  }

  // decrement the number of running tasks
  context.active -= 1; 

  // when all tasks complete, trigger done state
  if(!context.active) { 
    sm.emit('done');
  }
};

// operational states
// start state - initializes context
sm.on('start', (paths) => {
  const len=paths.length;

  console.log(`start: beginning processing of ${len} paths`);

  context.tasks = len;              // total number of tasks
  context.active = len;             // number of active tasks

  sm.emit('forEachPath', paths);    // go to next state
});

// start processing of each path
sm.on('forEachPath', (paths)=>{

  console.log(`forEachPath: starting ${paths.length} process chains`);

  paths.forEach((path) => sm.emit('readPath', path));
});

// read contents from path
sm.on('readPath', (path) => {

  console.log(`  readPath: ${path}`);

  fs.readFile(path,(err,buf) => {
    if(err) {
      sm.emit('error',err);
      return;
    }
    sm.emit('processContent', buf.toString(), path);
  });

});

// compute length of path contents
sm.on('processContent', (str, path) => {

  console.log(`  processContent: ${path}`);

  next(str.length);
});

// when processing is complete
sm.on('done', () => { 
  const total = context.results.reduce((sum,n) => sum + n);
  console.log(`The total of ${context.tasks} files is ${total}`);
});

// error state
sm.on('error', (err) => { throw err; });

// ======================================================
// start processing - ok, let's go
// ======================================================
sm.emit('start', ['file1','file2','file3','file4']);

Which will output:

start: beginning processing of 4 paths
forEachPath: starting 4 process chains
  readPath: file1
  readPath: file2
  processContent: file1
  readPath: file3
  processContent: file2
  processContent: file3
  readPath: file4
  processContent: file4
The total of 4 files is 4021

Note that the ordering of the process chain tasks is dependent upon system load.

You can envision the program flow as:

start -> forEachPath -+-> readPath1 -> processContent1 -+-> done
                      +-> readPath2 -> processContent2 -+
                      +-> readPath3 -> processContent3 -+
                      +-> readPath4 -> processContent4 -+

For reuse, it would be trivial to create a module to support the various flow-control patterns, i.e. series, parallel, batch, while, until, etc.

Answered by alienhard

The simplest solution is to run the do_something* and unlink in sequence as follows:

do_something(tmp_file_name, function(err) {
    do_something_other(tmp_file_name, function(err) {
        fs.unlink(tmp_file_name);
    });
});

Unless, for performance reasons, you want to execute do_something() and do_something_other() in parallel, I suggest keeping it simple and going this way.

Answered by marverix

With pure Promises it could be a bit more messy, but if you use Deferred Promises then it's not so bad:

Install:

npm install --save @bitbar/deferred-promise

Modify your code:

const DeferredPromise = require('@bitbar/deferred-promise');

const promises = [
  new DeferredPromise(),
  new DeferredPromise()
];

do_something(tmp_file_name, (err) => {
  if (err) {
    promises[0].reject(err);
  } else {
    promises[0].resolve();
  }
});

do_something_other(tmp_file_name, (err) => {
  if (err) {
    promises[1].reject(err);
  } else {
    promises[1].resolve();
  }
});

Promise.all(promises).then( () => {
  fs.unlink(tmp_file_name);
});
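If you'd rather avoid the extra dependency, a deferred can be built from a plain Promise by capturing the executor's resolve/reject; a minimal sketch:

```javascript
// plain-Promise version of a deferred: capture resolve/reject from the executor
function makeDeferred() {
  let resolve, reject;
  const promise = new Promise((res, rej) => { resolve = res; reject = rej; });
  return { promise, resolve, reject };
}

// each callback-style operation settles its own deferred
const d1 = makeDeferred();
const d2 = makeDeferred();

setTimeout(() => d1.resolve('first'), 10); // stands in for do_something
setTimeout(() => d2.resolve('second'), 5); // stands in for do_something_other

// Promise.all preserves input order regardless of completion order
Promise.all([d1.promise, d2.promise]).then((vals) => {
  console.log(vals); // => ['first', 'second']
});
```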

Answered by Lucio M. Tato

Wait.for: https://github.com/luciotato/waitfor

using Wait.for:

var wait=require('wait.for');

...in a fiber...

wait.for(do_something,tmp_file_name);
wait.for(do_something_other,tmp_file_name);
fs.unlink(tmp_file_name);