Redis 比 mongoDB 快多少?
声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/5252577/
Warning: this content is provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me):
StackOverFlow
How much faster is Redis than mongoDB?
提问by Homer6
It's widely mentioned that Redis is "Blazing Fast" and mongoDB is fast too. But I'm having trouble finding actual numbers comparing the results of the two. Given similar configurations, features and operations (and maybe showing how the factor changes with different configurations and operations), etc., is Redis 10x faster? 2x faster? 5x faster?
人们普遍提到 Redis "极快",而 mongoDB 也很快。但是,我很难找到比较两者结果的实际数字。考虑到类似的配置、特性和操作(并且可能显示这个倍数如何随不同的配置和操作而变化)等,Redis 是快 10 倍、快 2 倍,还是快 5 倍?
I'm ONLY speaking of performance. I understand that mongoDB is a different tool and has a richer feature set. This is not the "Is mongoDB better than Redis" debate. I'm asking, by what margin does Redis outperform mongoDB?
我只讨论性能。我知道 mongoDB 是一个不同的工具,具有更丰富的功能集。这不是"mongoDB 是否比 Redis 更好"的争论。我在问,Redis 在多大程度上优于 mongoDB?
At this point, even cheap benchmarks are better than no benchmarks.
在这一点上,即使是廉价的基准测试也比没有基准测试要好。
回答by zeekay
Rough results from the following benchmark: 2x write, 3x read.
以下基准测试的粗略结果:写入约快 2 倍,读取约快 3 倍。
Here's a simple benchmark in Python that you can adapt to your purposes. I was looking at how well each would perform when simply setting/retrieving values:
这是一个简单的 Python 基准测试,您可以根据自己的目的进行调整。我想看看两者在简单地设置/读取值时各自的表现如何:
#!/usr/bin/env python2.7
import sys, time
from pymongo import Connection
import redis

# connect to redis & mongodb
redis = redis.Redis()
mongo = Connection().test
collection = mongo['test']
collection.ensure_index('key', unique=True)

def mongo_set(data):
    for k, v in data.iteritems():
        collection.insert({'key': k, 'value': v})

def mongo_get(data):
    for k in data.iterkeys():
        val = collection.find_one({'key': k}, fields=('value',)).get('value')

def redis_set(data):
    for k, v in data.iteritems():
        redis.set(k, v)

def redis_get(data):
    for k in data.iterkeys():
        val = redis.get(k)

def do_tests(num, tests):
    # setup dict with key/values to retrieve
    data = {'key' + str(i): 'val' + str(i)*100 for i in range(num)}
    # run tests
    for test in tests:
        start = time.time()
        test(data)
        elapsed = time.time() - start
        print "Completed %s: %d ops in %.2f seconds : %.1f ops/sec" % (test.__name__, num, elapsed, num / elapsed)

if __name__ == '__main__':
    num = 1000 if len(sys.argv) == 1 else int(sys.argv[1])
    tests = [mongo_set, mongo_get, redis_set, redis_get] # order of tests is significant here!
    do_tests(num, tests)
Results with mongodb 1.8.1 and redis 2.2.5 and the latest pymongo/redis-py:
使用 mongodb 1.8.1 和 redis 2.2.5 以及最新的 pymongo/redis-py 的结果:
$ ./cache_benchmark.py 10000
Completed mongo_set: 10000 ops in 1.40 seconds : 7167.6 ops/sec
Completed mongo_get: 10000 ops in 2.38 seconds : 4206.2 ops/sec
Completed redis_set: 10000 ops in 0.78 seconds : 12752.6 ops/sec
Completed redis_get: 10000 ops in 0.89 seconds : 11277.0 ops/sec
Take the results with a grain of salt, of course! If you are programming in another language, using other clients/different implementations, etc., your results will vary wildly. Not to mention your usage will be completely different! Your best bet is to benchmark them yourself, in precisely the manner you are intending to use them. As a corollary you'll probably figure out the best way to make use of each. Always benchmark for yourself!
当然,对这些结果要持保留态度!如果您使用另一种语言编程、使用其他客户端/不同的实现等等,您的结果会大不相同。更何况您的用法也会完全不同!最好的办法是完全按照您打算使用它们的方式,自己对它们进行基准测试。作为推论,您很可能会由此找出使用每一个的最佳方式。一定要自己做基准测试!
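The snippet above targets Python 2.7 and the long-deprecated pymongo Connection API. Below is a rough Python 3 adaptation of the same benchmark (a sketch only, assuming current pymongo with MongoClient/insert_one and current redis-py), in case you want to re-run it today:
上面的脚本基于 Python 2.7 和早已弃用的 pymongo Connection API。下面是同一基准测试的一个粗略的 Python 3 改写(仅作示意,假设使用带 MongoClient/insert_one 的当前版 pymongo 和当前版 redis-py),便于今天重新运行:
#!/usr/bin/env python3
# Rough Python 3 port of the benchmark above (an adaptation, not the original
# author's code); assumes current pymongo and redis-py and local servers.
import sys, time
import redis
from pymongo import MongoClient

r = redis.Redis()
collection = MongoClient().test['test']
collection.create_index('key', unique=True)

def mongo_set(data):
    for k, v in data.items():
        collection.insert_one({'key': k, 'value': v})

def mongo_get(data):
    for k in data:
        collection.find_one({'key': k}, {'value': 1})['value']

def redis_set(data):
    for k, v in data.items():
        r.set(k, v)

def redis_get(data):
    for k in data:
        r.get(k)

def do_tests(num, tests):
    # same payload shape as the original: 'keyN' -> 'val' plus N repeated 100 times
    data = {'key' + str(i): 'val' + str(i) * 100 for i in range(num)}
    for test in tests:
        start = time.time()
        test(data)
        elapsed = time.time() - start
        print('Completed %s: %d ops in %.2f seconds : %.1f ops/sec'
              % (test.__name__, num, elapsed, num / elapsed))

if __name__ == '__main__':
    num = 1000 if len(sys.argv) == 1 else int(sys.argv[1])
    do_tests(num, [mongo_set, mongo_get, redis_set, redis_get])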
回答by Andrei Andrushkevich
Please check this post about Redis and MongoDB insertion performance analysis:
请查看这篇关于 Redis 和 MongoDB 插入性能分析的帖子:
Up to 5000 entries mongodb $push is faster even when compared to Redis RPUSH, then it becomes incredibly slow; probably the mongodb array type has linear insertion time and so it becomes slower and slower. mongodb might gain a bit of performance by exposing a constant time insertion list type, but even with the linear time array type (which can guarantee constant time look-up) it has its applications for small sets of data.
在 5000 个条目以内,mongodb 的 $push 甚至比 Redis 的 RPUSH 还要快,但之后它会变得非常慢,原因可能是 mongodb 的数组类型插入时间是线性的,因此会越来越慢。mongodb 如果提供常数时间插入的列表类型,也许能获得一些性能提升,但即便是线性插入时间的数组类型(它能保证常数时间查找),在小数据集上也有其用武之地。
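To get a feel for that append pattern yourself, here is a minimal sketch (an illustration only, not the linked post's code; it assumes local servers and current pymongo/redis-py) that pushes N items onto one MongoDB document array versus one Redis list:
想自己感受一下这种追加模式,可以参考下面这个最小示例(仅作示意,并非上文链接帖子的代码;假设本地服务器以及当前版本的 pymongo/redis-py),它分别向一个 MongoDB 文档数组和一个 Redis 列表追加 N 个元素:
# Appends N integers to a single MongoDB document array ($push) and to a
# single Redis list (RPUSH), then prints the throughput of each.
import time
import redis
from pymongo import MongoClient

N = 20000
r = redis.Redis()
coll = MongoClient().test.push_test

coll.drop()
coll.insert_one({'_id': 'doc', 'items': []})
r.delete('items')

start = time.time()
for i in range(N):
    coll.update_one({'_id': 'doc'}, {'$push': {'items': i}})
print('mongo $push : %.1f ops/sec' % (N / (time.time() - start)))

start = time.time()
for i in range(N):
    r.rpush('items', i)
print('redis RPUSH : %.1f ops/sec' % (N / (time.time() - start)))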
回答by Tareq Salah
Good and simple benchmark
良好而简单的基准
I tried to re-run the benchmark above using the current versions of redis (2.6.16) and mongo (2.4.8), and here's the result:
我尝试使用当前版本的 redis (2.6.16) 和 mongo (2.4.8) 重新运行上面的基准测试,结果如下:
Completed mongo_set: 100000 ops in 5.23 seconds : 19134.6 ops/sec
Completed mongo_get: 100000 ops in 36.98 seconds : 2703.9 ops/sec
Completed redis_set: 100000 ops in 6.50 seconds : 15389.4 ops/sec
Completed redis_get: 100000 ops in 5.59 seconds : 17896.3 ops/sec
Also, this blog post compares both of them, but using node.js. It shows the effect of an increasing number of entries in the database on the time taken.
这篇博文也比较了两者,但使用的是 node.js。它展示了随着数据库中条目数量的增加,耗时如何变化。
回答by John F. Miller
Numbers are going to be hard to find, as the two are not quite in the same space. The general answer is that Redis is 10-30% faster when the data set fits within the working memory of a single machine. Once that amount of data is exceeded, Redis fails. Mongo will slow down by an amount which depends on the type of load. For an insert-only type of load one user recently reported a slowdown of 6 to 7 orders of magnitude (10,000 to 100,000 times), but that report also admitted that there were configuration issues and that this was a very atypical working load. Normal read-heavy loads anecdotally slow down by about 10X when some of the data must be read from disk.
这样的数字很难找到,因为两者并不完全处于同一领域。一般的答案是,当数据集能放进单台机器的工作内存时,Redis 会快 10-30%。一旦超过该数据量,Redis 就会失效。Mongo 则会变慢,变慢的幅度取决于负载类型。对于仅插入类型的负载,一位用户最近报告速度下降了 6 到 7 个数量级(即 10,000 到 100,000 倍),但该报告也承认存在配置问题,并且这是非常非典型的工作负载。据经验,当部分数据必须从磁盘读取时,正常的读密集负载大约会慢 10 倍。
Conclusion:Redis will be faster but not by a whole lot.
结论:Redis 会更快,但不会快很多。
回答by mistagrooves
Here is an excellent article about session performance in the Tornado framework, from about a year ago. It has a comparison between a few different implementations, of which Redis and MongoDB are included. The graph in the article states that Redis is behind MongoDB by about 10% in this specific use case.
这是一篇关于 Tornado 框架中会话性能的优秀文章,写于大约一年前。它比较了几种不同的实现,其中包括 Redis 和 MongoDB。文章中的图表表明,在这个特定用例中,Redis 落后于 MongoDB 约 10%。
Redis comes with a built-in benchmark that will analyze the performance of the machine you are on. There is a ton of raw data from it at the Benchmark wiki for Redis. But you might have to look around a bit for Mongo. Like here, here, and some random polish numbers (but it gives you a starting point for running some MongoDB benchmarks yourself).
Redis 自带一个内置的基准测试工具,可以分析您所在机器的性能。Redis 的 Benchmark wiki 中有大量由它产生的原始数据。但对于 Mongo,您可能需要四处找找。比如这里、这里,还有一些零散的波兰语基准数字(但它为您自己运行一些 MongoDB 基准测试提供了起点)。
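For example, here is a quick way to drive that built-in tool from a script and capture its numbers (a sketch only; it assumes redis-benchmark is on your PATH and a local Redis server is running):
例如,下面是从脚本中调用这个内置工具并获取其数据的一个简单示例(仅作示意;假设 redis-benchmark 在 PATH 中且本地 Redis 服务器正在运行):
# Run the bundled redis-benchmark tool (SET/GET only, 100k requests) and print
# its CSV summary; each output line is "<test>","<requests per second>".
import subprocess

result = subprocess.run(
    ['redis-benchmark', '-t', 'set,get', '-n', '100000', '-q', '--csv'],
    capture_output=True, text=True, check=True)
print(result.stdout)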
I believe the best solution to this problem is to perform the tests yourself in the types of situations you expect.
我相信这个问题的最佳解决方案是在您期望的情况下自己执行测试。
回答by schwarz
In my case, what has been a determining factor in the performance comparison is the MongoDB WriteConcern that is used. Most mongo drivers nowadays set the default WriteConcern to ACKNOWLEDGED, which means 'written to RAM' (Mongo2.6.3-WriteConcern); in that regard, it was very comparable to redis for most write operations.
就我而言,性能比较中的决定性因素是所使用的 MongoDB WriteConcern。现在大多数 mongo 驱动程序都会将默认的 WriteConcern 设置为 ACKNOWLEDGED,意思是"已写入 RAM"(Mongo2.6.3-WriteConcern);在这一点上,对于大多数写入操作,它与 redis 非常接近。
But the reality is that, depending on your application needs and production environment setup, you may want to change this concern to WriteConcern.JOURNALED (written to the on-disk journal) or WriteConcern.FSYNCED (written to disk), or even have writes acknowledged by replica sets (back-ups) if that is needed.
但实际情况是,取决于您的应用需求和生产环境设置,您可能需要把这个级别改为 WriteConcern.JOURNALED(写入磁盘日志)或 WriteConcern.FSYNCED(写入磁盘),甚至在需要时要求写入得到副本集(备份)的确认。
Then you may start seeing some performance decrease. Other important factors include how optimized your data access patterns are, the index miss % (see mongostat), and indexes in general.
这时您可能就会开始看到一些性能下降。其他重要因素还包括:数据访问模式的优化程度、索引未命中率(参见 mongostat),以及索引的总体情况。
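As a concrete illustration, here is a small sketch (assuming current pymongo; the JOURNALED/FSYNCED names above come from the Java driver, while pymongo takes w/j options) of how those levels can be selected:
作为一个具体示例,下面简单演示如何选择这些级别(假设使用当前版本的 pymongo;上面的 JOURNALED/FSYNCED 名称来自 Java 驱动,而 pymongo 使用 w/j 参数):
# Three handles to the same collection with increasingly strict write concerns;
# stricter levels generally trade write throughput for durability.
from pymongo import MongoClient, WriteConcern

db = MongoClient().test

acknowledged = db.get_collection('wc_demo')                  # default: w=1, "written to RAM"
journaled = db.get_collection(
    'wc_demo', write_concern=WriteConcern(w=1, j=True))      # also wait for the journal
replicated = db.get_collection(
    'wc_demo', write_concern=WriteConcern(w='majority'))     # wait for a majority of replica-set members

for coll in (acknowledged, journaled, replicated):
    coll.insert_one({'concern': str(coll.write_concern.document)})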
回答by Elior Malul
I think that the 2-3X shown in the benchmark above is misleading, since it also depends on the hardware you run it on - from my experience, the 'stronger' the machine is, the bigger the gap (in favor of Redis) will be, probably due to the fact that the benchmark hits the memory bounds pretty fast.
我认为上面基准测试中显示的 2-3 倍有误导性,因为它还取决于运行它的硬件 - 根据我的经验,机器越"强大",(有利于 Redis 的)差距就越大,原因可能是该基准测试很快就触及了内存方面的瓶颈。
As for the memory capacity - this is only partially true, since there are ways to get around it: there are (commercial) products that write Redis data back to disk, as well as cluster (multi-sharded) solutions that overcome the memory-size limitation.
至于内存容量 - 这只是部分正确,因为也有办法绕过这一点:有(商业)产品可以把 Redis 数据写回磁盘,还有集群(多分片)方案可以克服内存大小的限制。