MongoDB bulk write error

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same CC BY-SA license, link to the original, and attribute it to the original authors (not me): StackOverflow original: http://stackoverflow.com/questions/30355790/

Date: 2020-09-08 20:26:30  Source: igfitidea

Mongodb bulk write error

Tags: mongodb, pymongo

Asked by David Makovoz

I'm executing a bulk write:

    bulk = new_packets.initialize_ordered_bulk_op()
    bulk.insert(packet)
    output = bulk.execute()

and getting an error that I interpret to mean that packet is not a dict. However, I do know that it is a dict. What could be the problem?

Here is the error:

    BulkWriteError                            Traceback (most recent call last)
    <ipython-input-311-93f16dce5714> in <module>()
          2 
          3 bulk.insert(packet)
    ----> 4 output = bulk.execute()

    C:\Users\e306654\AppData\Local\Continuum\Anaconda\lib\site-packages\pymongo\bulk.pyc in execute(self, write_concern)
        583         if write_concern and not isinstance(write_concern, dict):
        584             raise TypeError('write_concern must be an instance of dict')
    --> 585         return self.__bulk.execute(write_concern)

    C:\Users\e306654\AppData\Local\Continuum\Anaconda\lib\site-packages\pymongo\bulk.pyc in execute(self, write_concern)
        429             self.execute_no_results(generator)
        430         elif client.max_wire_version > 1:
    --> 431             return self.execute_command(generator, write_concern)
        432         else:
        433             return self.execute_legacy(generator, write_concern)

    C:\Users\e306654\AppData\Local\Continuum\Anaconda\lib\site-packages\pymongo\bulk.pyc in execute_command(self, generator, write_concern)
        296                 full_result['writeErrors'].sort(
        297                     key=lambda error: error['index'])
    --> 298             raise BulkWriteError(full_result)
        299         return full_result
        300 

    BulkWriteError: batch op errors occurred

Accepted answer by David Makovoz

Ok, the problem was that I was assigning _id explicitly, and it turns out that the string was larger than the 12-byte limit; my bad.
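For context, a BSON ObjectId is exactly 12 bytes, conventionally rendered as a 24-character hex string, so a string of a different length cannot be parsed as one. A stdlib-only sketch of that size relationship (no pymongo required; the random bytes stand in for real ObjectId bytes):

```python
import binascii
import os

# An ObjectId is 12 bytes (4-byte timestamp + 5-byte random + 3-byte counter),
# usually displayed as a 24-character hex string.
raw = os.urandom(12)  # stand-in for the bytes of a real ObjectId
hex_id = binascii.hexlify(raw).decode()
print(len(hex_id))  # → 24
```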

Answer by Samer Aamar

It can happen for many reasons... the best thing is to wrap the call in try/except, catch the exception, and inspect the errors:

from pymongo.errors import BulkWriteError
try:
    bulk.execute()
except BulkWriteError as bwe:
    print(bwe.details)
    #you can also take this component and do more analysis
    #werrors = bwe.details['writeErrors']
    raise
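
The `details` dict groups per-document failures under `writeErrors`, each entry carrying the batch `index`, an error `code`, and an `errmsg`. Here is an illustrative dict (hand-written, not captured from a real run) and one way to pull out the failing positions:

```python
# Illustrative shape of bwe.details after a failed ordered bulk insert.
details = {
    "writeErrors": [
        {"index": 1, "code": 11000, "errmsg": "E11000 duplicate key error"},
    ],
    "nInserted": 1,
}

# Collect the positions in the batch that failed.
failed_positions = [err["index"] for err in details["writeErrors"]]
print(failed_positions)  # → [1]
```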

Answer by Miguel Angel

You should check two things:

  1. Duplicates, if you are defining your own key.
  2. Custom types you cannot serialize. In my case I was trying to pass a hash-type object that could not be converted into a valid ObjectId, which led me back to the first point in a vicious circle (I solved it by converting myObject to a string).
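
If you define your own `_id`, point 1 can be checked before touching the database: count the keys in the batch and flag any that repeat. A small stdlib-only sketch (the document contents are illustrative):

```python
from collections import Counter

# Illustrative batch with a repeated custom _id.
docs = [
    {"_id": "pkt-1", "v": 10},
    {"_id": "pkt-2", "v": 20},
    {"_id": "pkt-1", "v": 30},
]

# Any _id appearing more than once would trigger a duplicate key error on insert.
counts = Counter(d["_id"] for d in docs)
dupes = sorted(k for k, c in counts.items() if c > 1)
print(dupes)  # → ['pkt-1']
```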

Inserting one by one will give you an idea of what is happening.