
Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/9274777/

Date: 2020-09-09 12:29:04  Source: igfitidea

MongoDB as a queue service?

Tags: mongodb, web, queue, social

Asked by Avi Kapuya

I would love to hear more about real application experience with MongoDB as a queue service. If you have used MongoDB for this purpose, could you share your thoughts, as well as the environment in which it was used?


Answered by Andrew Orsich

I am using mongodb as a queue service for email sending. In short, it works in the following way:


  1. When a new message comes in, I store it in mongodb.
  2. A background job then loads the message from mongodb via the atomic operation `findAndModify` and sets the `Processing` flag to true, so the same message is not processed twice (my background job runs multiple threads in parallel).
  3. Once the email has been sent, I remove the document from mongodb.
  4. You can also keep a failure count for each message and remove it after 3 failed attempts.
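The four steps above can be sketched as follows. This is a minimal in-memory stand-in, not the answerer's actual code; with MongoDB, the claim in step 2 would be a single atomic `find_one_and_update` (`findAndModify`) call rather than a Python loop:

```python
import datetime

# In-memory stand-in for a MongoDB collection of email messages.
messages = []

def enqueue(payload):
    # Step 1: store a new message with the bookkeeping fields the answer describes.
    messages.append({
        "payload": payload,
        "Processing": False,
        "failures": 0,
        "created": datetime.datetime.utcnow(),
    })

def claim():
    # Step 2: claim one unclaimed message so parallel workers never
    # process the same message twice. In MongoDB this whole function
    # collapses into one atomic find_one_and_update call.
    for msg in messages:
        if not msg["Processing"]:
            msg["Processing"] = True
            return msg
    return None

def complete(msg):
    # Step 3: remove the document once the email has been sent.
    messages.remove(msg)

def fail(msg, max_attempts=3):
    # Step 4: count failures and drop the message after 3 failed attempts;
    # otherwise release it so another worker can retry.
    msg["failures"] += 1
    msg["Processing"] = False
    if msg["failures"] >= max_attempts:
        messages.remove(msg)
```

A worker loop would then repeatedly call `claim()`, send the email, and call `complete()` on success or `fail()` on error.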

In general, I use mongodb as a queue service for only one reason: I need to send emails on a specified schedule (each message contains information about what time it should be sent).


If you do not have any schedule and need to process messages immediately, I suggest that you look into existing queue services, because they probably handle all the cases that you may not see without a deeper understanding of message queues.


Update


When the background job crashes during message processing, you can do the following:


  1. Move the message to a separate message-queue errors collection, or..

  2. Increase the processing-attempts counter in the message and assign it the status "New" again, to retry processing it. Just make sure that the background job is idempotent (it can process the same message multiple times without corrupting data) and transactional (when the job fails, you must undo any changes that were made). When a job still fails after 5 attempts (a config value), perform #1.

  3. Once the bug in message processing has been fixed, you can process the message once more by assigning it the "New" status and moving it back to the message queue, or just delete it. It depends on the actual business processes.

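The failure-handling options above can be sketched like this; the collection names, field names, and `MAX_ATTEMPTS` value are illustrative stand-ins, with plain lists standing in for the MongoDB collections:

```python
MAX_ATTEMPTS = 5  # the "config value" from the answer

queue, errors = [], []  # stand-ins for the queue and errors collections

def handle_failure(msg):
    # Option 2: bump the attempts counter and re-mark the message "New"
    # so a worker picks it up again; the job itself must be idempotent,
    # since the message may now be processed more than once.
    msg["attempts"] = msg.get("attempts", 0) + 1
    if msg["attempts"] >= MAX_ATTEMPTS:
        # Option 1: after too many failures, move the message to a
        # separate errors collection for later inspection.
        queue.remove(msg)
        errors.append(msg)
    else:
        msg["status"] = "New"

def requeue_from_errors(msg):
    # Option 3: once the processing bug is fixed, move the message back
    # to the queue with status "New" so it is retried from scratch.
    errors.remove(msg)
    msg["status"] = "New"
    msg["attempts"] = 0
    queue.append(msg)
```

With real MongoDB, `handle_failure` would be an update with `$inc` on the attempts counter, and the move to the errors collection an insert plus a delete.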

Answered by Fer To

I know that this question dates back to 2012, but during my own research I found this article and just want to inform any other users that the devs from Server Density replaced RabbitMQ with a simple queueing system built on MongoDB.


A detailed article is given here:


https://blog.serverdensity.com/replacing-rabbitmq-with-mongodb/


Answered by Sean Reilly

Here is a great article explaining how someone used MongoDB's replication oplog as a queue.


You can do the same with a different collection. The main piece of advice seems to be to use a capped collection: mongo drivers have an efficient means of waiting on a capped collection, so the client isn't constantly polling.

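The "waiting instead of polling" behavior the answer refers to is what a tailable cursor with `awaitData` on a capped collection gives you in MongoDB. A toy model of that blocking behavior, using a plain condition variable instead of a real cursor, might look like this (the class and method names are illustrative, not part of any driver API):

```python
import threading

class BlockingTail:
    """Toy model of tailing a capped collection: the consumer blocks
    until new data arrives instead of busy-polling, which is roughly
    what a tailable awaitData cursor provides in MongoDB."""

    def __init__(self):
        self._items = []
        self._cond = threading.Condition()

    def append(self, item):
        # Producer side: insert a document and wake one blocked reader.
        with self._cond:
            self._items.append(item)
            self._cond.notify()

    def tail(self, timeout=None):
        # Consumer side: block until an item is available (no polling
        # loop against the server); return None on timeout.
        with self._cond:
            while not self._items:
                if not self._cond.wait(timeout=timeout):
                    return None
            return self._items.pop(0)
```

The point of the sketch is only the control flow: the reader sleeps inside `tail()` until the writer signals it, the same way a tailable cursor keeps the request open server-side rather than re-querying.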

Answered by justmao945

I've searched a lot and found the JavaScript version: https://github.com/chilts/mongodb-queue. But I wanted a Go version, so I made a simple implementation in Go, including a manager to poll messages: https://github.com/justmao945/mongomq


Answered by Neil

Here is a simple message queue implementation.


It is part of an article that evaluates the performance of a variety of message queue systems.


A single-thread, single-node setup achieves 7 900 msgs/s sent and 1 900 msgs/s received.


Answered by nickmilon

Here is my Python implementation of PubSub / queue. It works with either a tailing cursor on a capped collection or by polling a normal collection. I used it in a few projects where I wanted to simplify my stack, with quite good results. Of course, as somebody already mentioned, this works until you reach the limits of the atomic findAndModify, but that can be taken care of by various techniques.

这是我的 PubSub / queue 的 Python 实现它通过在上限集合上的拖尾游标或轮询普通集合来工作。在一些项目中使用了它,我想简化我的堆栈并获得很好的结果。当然正如有人提到的,直到你达到原子 findAndModify 的极限,但这可以通过各种技术来解决