Python: How to import data from MongoDB into Pandas?

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must follow the same CC BY-SA terms and attribute it to the original authors (not me). Original StackOverflow question: http://stackoverflow.com/questions/16249736/

How to import data from mongodb to pandas?

Tags: python, mongodb, pandas, pymongo

Asked by Nithin

I have a large amount of data in a collection in MongoDB which I need to analyze. How do I import that data into pandas?

I am new to pandas and numpy.

EDIT: The mongodb collection contains sensor values tagged with date and time. The sensor values are of float datatype.

Sample Data:

{
"_cls" : "SensorReport",
"_id" : ObjectId("515a963b78f6a035d9fa531b"),
"_types" : [
    "SensorReport"
],
"Readings" : [
    {
        "a" : 0.958069536790466,
        "_types" : [
            "Reading"
        ],
        "ReadingUpdatedDate" : ISODate("2013-04-02T08:26:35.297Z"),
        "b" : 6.296118156595,
        "_cls" : "Reading"
    },
    {
        "a" : 0.95574014778624,
        "_types" : [
            "Reading"
        ],
        "ReadingUpdatedDate" : ISODate("2013-04-02T08:27:09.963Z"),
        "b" : 6.29651468650064,
        "_cls" : "Reading"
    },
    {
        "a" : 0.953648289182713,
        "_types" : [
            "Reading"
        ],
        "ReadingUpdatedDate" : ISODate("2013-04-02T08:27:37.545Z"),
        "b" : 7.29679823731148,
        "_cls" : "Reading"
    },
    {
        "a" : 0.955931884300997,
        "_types" : [
            "Reading"
        ],
        "ReadingUpdatedDate" : ISODate("2013-04-02T08:28:21.369Z"),
        "b" : 6.29642922525632,
        "_cls" : "Reading"
    },
    {
        "a" : 0.95821381,
        "_types" : [
            "Reading"
        ],
        "ReadingUpdatedDate" : ISODate("2013-04-02T08:41:20.801Z"),
        "b" : 7.28956613,
        "_cls" : "Reading"
    },
    {
        "a" : 4.95821335,
        "_types" : [
            "Reading"
        ],
        "ReadingUpdatedDate" : ISODate("2013-04-02T08:41:36.931Z"),
        "b" : 6.28956574,
        "_cls" : "Reading"
    },
    {
        "a" : 9.95821341,
        "_types" : [
            "Reading"
        ],
        "ReadingUpdatedDate" : ISODate("2013-04-02T08:42:09.971Z"),
        "b" : 0.28956488,
        "_cls" : "Reading"
    },
    {
        "a" : 1.95667927,
        "_types" : [
            "Reading"
        ],
        "ReadingUpdatedDate" : ISODate("2013-04-02T08:43:55.463Z"),
        "b" : 0.29115237,
        "_cls" : "Reading"
    }
],
"latestReportTime" : ISODate("2013-04-02T08:43:55.463Z"),
"sensorName" : "56847890-0",
"reportCount" : 8
}

Accepted answer by waitingkuo

pymongo might give you a hand; the following is some code I'm using:

import pandas as pd
from pymongo import MongoClient


def _connect_mongo(host, port, username, password, db):
    """ A util for making a connection to mongo """

    if username and password:
        mongo_uri = 'mongodb://%s:%s@%s:%s/%s' % (username, password, host, port, db)
        conn = MongoClient(mongo_uri)
    else:
        conn = MongoClient(host, port)

    return conn[db]


def read_mongo(db, collection, query={}, host='localhost', port=27017, username=None, password=None, no_id=True):
    """ Read from Mongo and Store into DataFrame """

    # Connect to MongoDB
    db = _connect_mongo(host=host, port=port, username=username, password=password, db=db)

    # Make a query to the specific DB and Collection
    cursor = db[collection].find(query)

    # Expand the cursor and construct the DataFrame
    df = pd.DataFrame(list(cursor))

    # Delete the _id
    if no_id:
        del df['_id']

    return df

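For example, a minimal sketch of calling it (the database name mydb and collection name SensorReport are placeholders, not taken from the question; the sensorName value comes from the sample document):

# Hypothetical usage of the read_mongo helper above.
df = read_mongo('mydb', 'SensorReport', query={'sensorName': '56847890-0'})
print(df.head())
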
Answer by Jeff

http://docs.mongodb.org/manual/reference/mongoexport

Export to CSV and use read_csv, or export to JSON and use DataFrame.from_records().

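A minimal sketch of the CSV route, assuming placeholder names mydb / SensorReport and an output file sensors.csv (the field list is taken from the sample document's top-level keys):

# Run in a shell first: export the collection to CSV with mongoexport.
#   mongoexport --db mydb --collection SensorReport --type=csv \
#       --fields sensorName,latestReportTime,reportCount --out sensors.csv
import pandas as pd

df = pd.read_csv('sensors.csv')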

Answer by shx2

Monary does exactly that, and it's super fast. (another link)

See this cool post, which includes a quick tutorial and some timings.

Answer by saimadhu.polamuri

You can load your MongoDB data into a pandas DataFrame using this code. It works for me. Hopefully for you too.

import pandas as pd
from pymongo import MongoClient

# Connect to a local MongoDB instance, pick the database and collection,
# then materialise every document of the collection into a DataFrame.
client = MongoClient()
db = client.database_name
collection = db.collection_name
data = pd.DataFrame(list(collection.find()))

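If you do not need every document, you can also pass a filter and a projection to find() before building the DataFrame. A small sketch reusing the collection handle above (the sensorName value comes from the sample document in the question):

# Only fetch matching documents and drop the _id field on the server side.
cursor = collection.find({'sensorName': '56847890-0'}, {'_id': 0})
data = pd.DataFrame(list(cursor))
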
Answer by Deo Leung

Using

pandas.DataFrame(list(...))

will consume a lot of memory if the iterator/generator result is large

It is better to generate small chunks and concatenate them at the end:

import pandas as pd


def iterator2dataframes(iterator, chunk_size: int):
    """Turn an iterator into multiple small pandas.DataFrame

    This is a balance between memory and efficiency
    """
    records = []
    frames = []
    for i, record in enumerate(iterator):
        records.append(record)
        if i % chunk_size == chunk_size - 1:
            frames.append(pd.DataFrame(records))
            records = []
    if records:
        frames.append(pd.DataFrame(records))
    return pd.concat(frames)

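A hypothetical call, reusing the collection handle from the earlier pymongo example and an arbitrary chunk size:

# Stream the cursor in chunks of 10,000 documents instead of building one huge list.
df = iterator2dataframes(collection.find(), chunk_size=10000)
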
Answer by Dennis Golomazov

For dealing with out-of-core (not fitting into RAM) data efficiently (i.e. with parallel execution), you can try the Python Blaze ecosystem: Blaze / Dask / Odo.

Blaze (and Odo) has out-of-the-box functions to deal with MongoDB.

A few useful articles to start off:

And an article which shows what amazing things are possible with the Blaze stack: Analyzing 1.7 Billion Reddit Comments with Blaze and Impala (essentially, querying 975 Gb of Reddit comments in seconds).

P.S. I'm not affiliated with any of these technologies.

Answer by fengwt

import pandas as pd
from odo import odo

# Load a whole MongoDB collection straight into a DataFrame with odo
data = odo('mongodb://localhost/db::collection', pd.DataFrame)

Answer by Cy Bu

As per PEP 20 (the Zen of Python), simple is better than complex:

import pandas as pd
df = pd.DataFrame.from_records(db.<database_name>.<collection_name>.find())

You can include conditions just as you would when working with a regular MongoDB database, or even use find_one() to get only one element from the database, etc.

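For example, a sketch with an assumed local client and placeholder database/collection names (mydb / SensorReport), filtering on a value from the sample document:

from pymongo import MongoClient
import pandas as pd

client = MongoClient()  # assumed local MongoDB instance
df = pd.DataFrame.from_records(client.mydb.SensorReport.find({'sensorName': '56847890-0'}))
one_doc = client.mydb.SensorReport.find_one({'sensorName': '56847890-0'})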

and voila!

Answer by Rafael Valero

Following this great answer by waitingkuo, I would like to add the possibility of doing that using chunksize, in line with .read_sql() and .read_csv(). I extend the answer from Deo Leung by avoiding going one by one through each 'record' of the 'iterator' / 'cursor'. I will borrow the previous read_mongo function.

def read_mongo(db,
               collection, query={},
               host='localhost', port=27017,
               username=None, password=None,
               chunksize=100, no_id=True):
    """ Read from Mongo and Store into DataFrame, chunk by chunk """

    # Connect to MongoDB
    # db = _connect_mongo(host=host, port=port, username=username, password=password, db=db)
    client = MongoClient(host=host, port=port)
    db_aux = client[db]

    # Some variables to create the chunks
    n_docs = db_aux[collection].count_documents(query)
    skips_variable = list(range(0, n_docs, int(chunksize)))
    # Make sure the last slice reaches the end of the collection
    skips_variable.append(n_docs)

    # Iteration to create the dataframe in chunks.
    df = pd.DataFrame()
    for i in range(1, len(skips_variable)):

        # Expand the cursor slice and construct the chunk DataFrame
        df_aux = pd.DataFrame(list(
            db_aux[collection].find(query)[skips_variable[i - 1]:skips_variable[i]]))

        # Delete the _id (skip when the chunk is empty)
        if no_id and '_id' in df_aux:
            del df_aux['_id']

        # Concatenate the chunks into a unique df
        df = pd.concat([df, df_aux], ignore_index=True)

    return df

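A hypothetical call with the same placeholder names as before, pulling the collection in chunks of 1000 documents:

df = read_mongo('mydb', 'SensorReport', chunksize=1000)
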
Answer by Jordy Cuan

A similar approach to those of Rafael Valero, waitingkuo and Deo Leung, using pagination:

def read_mongo(
        db,
        collection, query=None,
        host='localhost', port=27017, username=None, password=None,
        chunksize=100, page_num=1, no_id=True):

    # Connect to MongoDB
    db = _connect_mongo(host=host, port=port, username=username, password=password, db=db)

    # Calculate number of documents to skip
    skips = chunksize * (page_num - 1)

    # Avoid a mutable default argument for query
    # (reference, in Spanish):
    # https://www.toptal.com/python/c%C3%B3digo-buggy-python-los-10-errores-m%C3%A1s-comunes-que-cometen-los-desarrolladores-python/es
    if not query:
        query = {}

    # Make a query to the specific DB and Collection
    cursor = db[collection].find(query).skip(skips).limit(chunksize)

    # Expand the cursor and construct the DataFrame
    df = pd.DataFrame(list(cursor))

    # Delete the _id (skip when the page is empty)
    if no_id and '_id' in df:
        del df['_id']

    return df
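
A hypothetical way to walk through a whole collection page by page with this helper (placeholder database/collection names; it relies on the _connect_mongo helper from the accepted answer and stops when a page comes back empty):

import pandas as pd

pages = []
page_num = 1
while True:
    page = read_mongo('mydb', 'SensorReport', chunksize=1000, page_num=page_num)
    if page.empty:
        break
    pages.append(page)
    page_num += 1

df = pd.concat(pages, ignore_index=True) if pages else pd.DataFrame()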