MongoDB limit memory

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow

Original source: http://stackoverflow.com/questions/4365224/
Asked by Vlad Zloteanu
I am using mongo for storing log files. Both mongoDB and mysql are running on the same machine, virtualizing mongo env is not an option. I am afraid I will soon run into perf issues as the logs table grows very fast. Is there a way to limit resident memory for mongo so that it won't eat all available memory and excessively slow down the mysql server?
DB machine: Debian 'lenny' 5
Other solutions (please comment):
- As we need all historical data, we cannot use capped collections, but I am also considering a cron script that dumps and deletes old data
- Should I also consider using smaller keys, as suggested on other forums?
Accepted answer by Gates VP
Hey Vlad, you have a couple of simple strategies here regarding logs.
The first thing to know is that Mongo can generally handle lots of successive inserts without a lot of RAM. The reason is simple: you only insert or update recent entries, so the index size grows, but the data will be constantly paged out.
Put another way, you can break out the RAM usage into two major parts: index & data.
If you're running typical logging, the data portion is constantly being flushed away, so only the index really stays in RAM.
The second thing to know is that you can mitigate the index issue by putting logs into smaller buckets. Think of it this way: if you collect all of the logs into a date-stamped collection (call it logs20101206), then you can also control the size of the index in RAM.
As you roll over days, the old index will flush from RAM and it won't be accessed again, so it will simply go away.
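A minimal sketch of this bucketing idea in Python. The naming scheme matches the answer's logs20101206 example; the pymongo usage in the comment is an assumption, not from the answer:

```python
import datetime

def collection_for(day: datetime.date) -> str:
    """Return a date-stamped collection name, e.g. 'logs20101206'."""
    return "logs" + day.strftime("%Y%m%d")

# With pymongo, writes would then target the current day's bucket:
#   db[collection_for(datetime.date.today())].insert_one(entry)
```

Each day's inserts touch only that day's collection, so only that day's index needs to stay hot in RAM.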
but I am also considering using a cron script that dumps and deletes old data
This method of logging by days also helps delete old data. In three months, when you're done with the data, you simply run db.logs20101206.drop() and the collection instantly goes away. Note that you don't reclaim disk space (it's all pre-allocated), but new data will fill up the empty spot.
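One way the cron cleanup could be scripted, assuming the logsYYYYMMDD naming above; the helper here only computes which collections have aged out, and the actual drop calls (sketched in the comment) are an assumption:

```python
import datetime

def expired_log_collections(names, today, keep_days=90):
    """Return the 'logsYYYYMMDD' collections older than keep_days."""
    cutoff = today - datetime.timedelta(days=keep_days)
    expired = []
    for name in names:
        # Only consider names shaped like logs20101206 (4 letters + 8 digits).
        if name.startswith("logs") and len(name) == 12 and name[4:].isdigit():
            day = datetime.datetime.strptime(name[4:], "%Y%m%d").date()
            if day < cutoff:
                expired.append(name)
    return expired

# A nightly cron job could then run something like:
#   for name in expired_log_collections(db.list_collection_names(),
#                                       datetime.date.today()):
#       db[name].drop()
```

Dumping each collection with mongodump before dropping it would cover the "dump and delete" part of the question.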
Should I also consider using smaller keys, as suggested on other forums?
Yes.
In fact, I have it built into my data objects. So I access data using logs.action or logs->action, but underneath, the data is actually saved to logs.a. It's really easy to spend more space on "fields" than on "values", so it's worth shrinking the "fields" and trying to abstract the mapping away elsewhere.
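A sketch of such a field-abstraction layer. Only the action -> a mapping comes from the answer; the other short names are made-up examples:

```python
# Readable field names -> short stored keys ('action' -> 'a' is from the
# answer; 'message' and 'timestamp' are hypothetical examples).
FIELD_MAP = {"action": "a", "message": "m", "timestamp": "t"}
REVERSE_MAP = {short: long for long, short in FIELD_MAP.items()}

def to_stored(doc: dict) -> dict:
    """Shrink field names before inserting a log entry into MongoDB."""
    return {FIELD_MAP.get(key, key): value for key, value in doc.items()}

def from_stored(doc: dict) -> dict:
    """Restore readable field names after reading a document back."""
    return {REVERSE_MAP.get(key, key): value for key, value in doc.items()}
```

Because MongoDB stores every field name inside every document, shaving "action" down to "a" is saved once per log entry, which adds up quickly at logging volumes.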
Answered by pu.guan
For version 3.2+, which uses the WiredTiger storage engine, the option --wiredTigerCacheSizeGB is relevant to this question. You can set it if you know exactly what you are doing. I don't know if it's best practice; I just read it in the documentation and am raising it here.
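For reference, the same limit can be set either on the command line or in the mongod.conf configuration file; the 1 GB value below is only an example, not a recommendation:

```
# Command line:
mongod --wiredTigerCacheSizeGB 1

# Equivalent mongod.conf (YAML) setting:
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
```

Note this caps only the WiredTiger cache, not the total memory of the mongod process.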
Answered by Kdeveloper
For Windows, it seems possible to control the amount of memory MongoDB uses; see this tutorial at Captain Codeman: