Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use or share it, but you must do so under the same license and attribute it to the original authors (not me): StackOverflow.
Original URL: http://stackoverflow.com/questions/15680985/
What is the right way to deal with Mongodb connections?
Asked by Hyman
I'm trying out node.js with MongoDB (2.2.2), using the native node.js driver by 10gen.
At first everything went well, but when I came to the concurrency benchmarking part, a lot of errors occurred. Frequently connecting and closing with 1,000 concurrent connections can cause MongoDB to reject any further requests, with errors like:
Error: failed to connect to [localhost:27017]
Error: Could not locate any valid servers in initial seed list
Error: no primary server found in set
Also, if a lot of clients shut down without an explicit close, it takes MongoDB minutes to detect and close them, which causes similar connection problems. (Check /var/log/mongodb/mongodb.log for connection status.)
I have tried a lot of things. According to the manual, MongoDB has no connection limit, but the poolSize option seems to have no effect for me.
As I have only worked with the node-mongodb-native module, I'm not sure what ultimately caused the problem. How do other languages and drivers perform?
PS: Currently, a self-maintained pool is the only solution I have figured out, but it cannot solve the problem with replica sets. According to my tests, a replica set seems to take far fewer connections than a standalone MongoDB, but I have no idea why that happens.
Concurrency test code:
var MongoClient = require('mongodb').MongoClient;
var uri = "mongodb://192.168.0.123:27017,192.168.0.124:27017/test";

for (var i = 0; i < 1000; i++) {
    MongoClient.connect(uri, {
        server: {
            socketOptions: {
                connectTimeoutMS: 3000
            }
        }
    }, function (err, db) {
        if (err) {
            console.log('error: ', err);
        } else {
            var col = db.collection('test');
            col.insert({ abc: 1 }, function (err, result) {
                if (err) {
                    console.log('insert error: ', err);
                } else {
                    console.log('success: ', result);
                }
                db.close();
            });
        }
    });
}
Generic-pool solution:
var MongoClient = require('mongodb').MongoClient;
var poolModule = require('generic-pool');
var uri = "mongodb://localhost/test";

var read_pool = poolModule.Pool({
    name: 'redis_offer_payment_reader',
    create: function (callback) {
        MongoClient.connect(uri, {}, function (err, db) {
            if (err) {
                callback(err);
            } else {
                callback(null, db);
            }
        });
    },
    destroy: function (client) { client.close(); },
    max: 400,
    // optional. if you set this, make sure to drain() (see step 3)
    min: 200,
    // specifies how long a resource can stay idle in pool before being removed
    idleTimeoutMillis: 30000,
    // if true, logs via console.log - can also be a function
    log: false
});

var size = [];
for (var i = 0; i < 100000; i++) {
    size.push(i);
}

size.forEach(function () {
    read_pool.acquire(function (err, db) {
        if (err) {
            console.log('error: ', err);
        } else {
            var col = db.collection('test');
            col.insert({ abc: 1 }, function (err, result) {
                if (err) {
                    console.log('insert error: ', err);
                } else {
                    //console.log('success: ', result);
                }
                read_pool.release(db);
            });
        }
    });
});
Answered by Hector Correa
Since Node.js is single-threaded, you shouldn't be opening and closing the connection on each request (as you would in other multi-threaded environments).
This is a quote from the person who wrote the MongoDB node.js client module:
“You open do MongoClient.connect once when your app boots up and reuse the db object. It's not a singleton connection pool each .connect creates a new connection pool. So open it once an[d] reuse across all requests.” - christkv https://groups.google.com/forum/#!msg/node-mongodb-native/mSGnnuG8C1o/Hiaqvdu1bWoJ
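Applied to the test above, a minimal sketch of that connect-once pattern might look like this (the startApp function and the localhost URI are illustrative, not part of the original code):

var MongoClient = require('mongodb').MongoClient;
var db; // single shared db object, created once at boot

MongoClient.connect("mongodb://localhost:27017/test", function (err, database) {
    if (err) throw err;
    db = database;   // reuse this object for all requests
    startApp();      // start serving only after the connection is ready
});

function startApp() {
    // every request handler shares the same db object;
    // the driver's internal pool handles concurrent queries
    db.collection('test').insert({ abc: 1 }, function (err, result) {
        if (err) console.log('insert error: ', err);
    });
}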
Answered by Hyman
After looking into Hector's advice, I found that MongoDB connections are quite different from those of some other databases I have used. The main difference is in the native node.js driver: each opened MongoClient has its own connection pool, whose size is defined by
server: { poolSize: n }
So opening 5 MongoClient connections with poolSize: 100 means a total of 5 × 100 = 500 connections to the target MongoDB URI. In this case, frequently opening and closing MongoClient connections will definitely be a huge burden on the host and will eventually cause connection problems. That's why I ran into so much trouble in the first place.
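For illustration, a sketch of what that multiplication means in code (the URI is a placeholder):

var MongoClient = require('mongodb').MongoClient;
var uri = "mongodb://localhost:27017/test";

// each connect() call creates its own pool of up to 100 sockets,
// so these five clients can hold 5 * 100 = 500 server-side connections
for (var i = 0; i < 5; i++) {
    MongoClient.connect(uri, { server: { poolSize: 100 } }, function (err, db) {
        if (err) return console.log('error: ', err);
        // keep and reuse db instead of opening a new client per request
    });
}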
But since my code was already written that way, I use a pool that stores a single connection for each distinct URI, plus a simple parallelism limiter of the same size as the poolSize, to avoid connection errors under peak load.
Here is my code:
/* npm modules start */
var MongoClient = require('mongodb').MongoClient;
/* npm modules end */

// simple resource limitation module, controls the parallel size
var simple_limit = require('simple_limit').simple_limit;

// one uri, one connection
var client_pool = {};

var default_options = {
    server: {
        auto_reconnect: true,
        poolSize: 200,
        socketOptions: {
            connectTimeoutMS: 1000
        }
    }
};

var mongodb_pool = function (uri, options) {
    this.uri = uri;
    options = options || default_options;
    this.options = options;
    this.poolSize = 10; // default poolSize 10, used as the limiter's max
    if (undefined !== options.server && undefined !== options.server.poolSize) {
        this.poolSize = options.server.poolSize; // if options define a poolSize, use it
    }
};

// cb(err, db)
mongodb_pool.prototype.open = function (cb) {
    var self = this;
    if (undefined === client_pool[this.uri]) {
        console.log('new');
        // init the pool node with a lock and a wait list holding the current callback
        client_pool[this.uri] = {
            lock: true,
            wait: [cb]
        };
        // open mongodb first
        MongoClient.connect(this.uri, this.options, function (err, db) {
            if (err) {
                cb(err);
            } else {
                client_pool[self.uri].limiter = new simple_limit(self.poolSize);
                client_pool[self.uri].db = db;
                client_pool[self.uri].wait.forEach(function (callback) {
                    client_pool[self.uri].limiter.acquire(function () {
                        callback(null, client_pool[self.uri].db);
                    });
                });
                client_pool[self.uri].lock = false;
            }
        });
    } else if (true === client_pool[this.uri].lock) {
        // while another caller is connecting to the target uri, just wait
        client_pool[this.uri].wait.push(cb);
    } else {
        client_pool[this.uri].limiter.acquire(function () {
            cb(null, client_pool[self.uri].db);
        });
    }
};

// use close to release one limiter slot
mongodb_pool.prototype.close = function () {
    client_pool[this.uri].limiter.release();
};

exports.mongodb_pool = mongodb_pool;
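A usage sketch for the module above (assuming it is saved as mongodb_pool.js; the URI is a placeholder):

var mongodb_pool = require('./mongodb_pool').mongodb_pool;
var pool = new mongodb_pool("mongodb://localhost:27017/test");

pool.open(function (err, db) {
    if (err) return console.log('error: ', err);
    db.collection('test').insert({ abc: 1 }, function (err, result) {
        if (err) console.log('insert error: ', err);
        pool.close(); // release one limiter slot; the connection itself stays open
    });
});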

