Warning: this page is a translated copy of a StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/22487296/

multiprocessing in python - sharing large object (e.g. pandas dataframe) between multiple processes

Tags: python, pandas, multiprocessing

Asked by Anne

I am using Python multiprocessing, more precisely

from multiprocessing import Pool
p = Pool(15)

args = [(df, config1), (df, config2), ...] #list of args - df is the same object in each tuple
res = p.map_async(func, args) #func is some arbitrary function
p.close()
p.join()

This approach has huge memory consumption: it eats up pretty much all my RAM (at which point it gets extremely slow, hence making the multiprocessing pretty useless). I assume the problem is that df is a huge object (a large pandas dataframe) and it gets copied for each process. I have tried using multiprocessing.Value to share the dataframe without copying

shared_df = multiprocessing.Value(pandas.DataFrame, df)
args = [(shared_df, config1), (shared_df, config2), ...] 

(as suggested in Python multiprocessing shared memory), but that gives me TypeError: this type has no size (same as Sharing a complex object between Python processes?, to which I unfortunately don't understand the answer).

I am using multiprocessing for the first time and maybe my understanding is not (yet) good enough. Is multiprocessing.Value actually even the right thing to use in this case? I have seen other suggestions (e.g. queue) but am by now a bit confused. What options are there to share memory, and which one would be best in this case?

Answered by roippi

The first argument to Value is typecode_or_type. That is defined as:

typecode_or_type determines the type of the returned object: it is either a ctypes type or a one character typecode of the kind used by the array module. *args is passed on to the constructor for the type.

Emphasis mine. So you simply cannot put a pandas dataframe in a Value; it has to be a ctypes type.

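To make the distinction concrete, here is a small illustrative sketch (mine, not part of the original answer) of what Value is actually designed for: a single shared ctypes scalar, not an arbitrary Python object such as a dataframe.

import multiprocessing

def worker(val):
    # Value carries its own lock, so increments from several processes are safe
    with val.get_lock():
        val.value += 1.0

if __name__ == "__main__":
    # 'd' is the array-module typecode for a C double; a ctypes type such as
    # ctypes.c_double would work just as well as the first argument
    counter = multiprocessing.Value('d', 0.0)
    procs = [multiprocessing.Process(target=worker, args=(counter,))
             for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)  # 4.0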

You could instead use a multiprocessing.Manager to serve your singleton dataframe instance to all of your processes. There are a few different ways to end up in the same place - probably the easiest is to just plop your dataframe into the manager's Namespace.

from multiprocessing import Manager

mgr = Manager()
ns = mgr.Namespace()
ns.df = my_dataframe

# now just give your processes access to ns, i.e. most simply
# p = Process(target=worker, args=(ns, work_unit))

Now your dataframe instance is accessible to any process that gets passed a reference to the Manager. Or just pass a reference to the Namespace; it's cleaner.

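As a hedged illustration of that last point (the worker function and the toy dataframe below are my own, not from the original answer), a minimal end-to-end version might look like this. Note that reading ns.df pulls a pickled copy of the dataframe from the manager process into the worker, so this is convenient rather than zero-copy.

from multiprocessing import Manager, Process
import pandas as pd

def worker(ns, rows):
    # Fetch the dataframe from the manager's Namespace and use a slice of it
    df = ns.df
    print(df.iloc[rows].sum(numeric_only=True))

if __name__ == "__main__":
    my_dataframe = pd.DataFrame({"A": range(10), "B": range(10)})

    mgr = Manager()
    ns = mgr.Namespace()
    ns.df = my_dataframe

    p = Process(target=worker, args=(ns, [0, 1, 2]))
    p.start()
    p.join()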

One thing I didn't/won't cover is events and signaling - if your processes need to wait for others to finish executing, you'll need to add that in. Here is a page with some Event examples which also covers, in a bit more detail, how to use the manager's Namespace.

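In case that linked page moves, here is a rough self-contained sketch (my own, not from that page) of using an Event so a worker waits until the dataframe has actually been published to the Namespace:

from multiprocessing import Manager, Process, Event
import pandas as pd

def worker(ns, ready):
    ready.wait()              # block until the parent signals the data is ready
    print(len(ns.df))

if __name__ == "__main__":
    mgr = Manager()
    ns = mgr.Namespace()
    ready = Event()

    p = Process(target=worker, args=(ns, ready))
    p.start()

    ns.df = pd.DataFrame({"A": range(100)})  # publish the dataframe...
    ready.set()                              # ...then wake the worker
    p.join()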

(note that none of this addresses whether multiprocessing is going to result in tangible performance benefits; this is just giving you the tools to explore that question)

Answered by Mott The Tuple

You can share a pandas dataframe between processes without any memory overhead by creating a data_handler child process. This process receives calls from the other children with specific data requests (i.e. a row, a specific cell, a slice, etc.) from your very large dataframe object. Only the data_handler process keeps your dataframe in memory, unlike a manager such as Namespace, which causes the dataframe to be copied to all child processes. See below for a working example. This can be converted to a Pool; a sketch of that conversion follows the example.

Need a progress bar for this? See my answer here: https://stackoverflow.com/a/55305714/11186769

import queue  # Python 3 module name (the original answer used Python 2's "Queue")
import numpy as np
import pandas as pd
import multiprocessing
from random import randint

#==========================================================
# DATA HANDLER
#==========================================================

def data_handler( queue_c, queue_r, queue_d, n_processes ):

    # Create a big dataframe
    big_df = pd.DataFrame(np.random.randint(
        0,100,size=(100, 4)), columns=list('ABCD'))

    # Handle data requests
    finished = 0
    while finished < n_processes:

        try:
            # Get the index we sent in
            idx = queue_c.get(False)

        except queue.Empty:
            continue
        else:
            if idx == 'finished':
                finished += 1
            else:
                try:
                    # Use the big_df here!
                    B_data = big_df.loc[ idx, 'B' ]

                    # Send back some data
                    queue_r.put(B_data)
                except KeyError:
                    # Ignore requests for labels that are not in the dataframe
                    pass

# big_df may need to be deleted at the end. 
#import gc; del big_df; gc.collect()

#==========================================================
# PROCESS DATA
#==========================================================

def process_data( queue_c, queue_r, queue_d):

    data = []

    # Save computer memory with a generator
    generator = ( randint(0,x) for x in range(100) )

    for g in generator:

        """
        Lets make a request by sending
        in the index of the data we want. 
        Keep in mind you may receive another 
        child processes return call, which is
        fine if order isnt important.
        """

        #print(g)

        # Send an index value
        queue_c.put(g)

        # Handle the return call
        while True:
            try:
                return_call = queue_r.get(False)
            except queue.Empty:
                continue
            else:
                data.append(return_call)
                break

    queue_c.put('finished')
    queue_d.put(data)   

#==========================================================
# START MULTIPROCESSING
#==========================================================

def multiprocess( n_processes ):

    combined  = []
    processes = []

    # Create queues
    queue_data = multiprocessing.Queue()
    queue_call = multiprocessing.Queue()
    queue_receive = multiprocessing.Queue()

    for process in range(n_processes):

        if process == 0:
            # Load your data_handler once here
            p = multiprocessing.Process(
                target=data_handler,
                args=(queue_call, queue_receive, queue_data, n_processes))
            processes.append(p)
            p.start()

        p = multiprocessing.Process(
            target=process_data,
            args=(queue_call, queue_receive, queue_data))
        processes.append(p)
        p.start()

    for i in range(n_processes):
        data_list = queue_data.get()    
        combined += data_list

    for p in processes:
        p.join()    

    # Your B values
    print(combined)


if __name__ == "__main__":

    multiprocess( n_processes = 4 )
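The answer notes that this can be converted to a Pool. One way to do that (my own untested sketch; the function name multiprocess_with_pool is made up) is to keep data_handler as a dedicated Process and run the process_data workers through a Pool. Since multiprocessing.Queue objects cannot be passed to Pool workers as arguments, Manager queues are used instead.

from multiprocessing import Manager, Pool, Process

def multiprocess_with_pool(n_processes):

    mgr = Manager()

    # Manager queues (unlike multiprocessing.Queue) can be pickled and
    # handed to Pool workers as ordinary arguments
    queue_call = mgr.Queue()
    queue_receive = mgr.Queue()
    queue_data = mgr.Queue()

    # The data handler still runs as one dedicated process holding the dataframe
    handler = Process(target=data_handler,
                      args=(queue_call, queue_receive, queue_data, n_processes))
    handler.start()

    # The workers go through a Pool instead of individual Process objects
    with Pool(n_processes) as pool:
        pool.starmap(process_data,
                     [(queue_call, queue_receive, queue_data)] * n_processes)

    combined = []
    for _ in range(n_processes):
        combined += queue_data.get()

    handler.join()
    print(combined)

# usage, replacing the original __main__ call:
# if __name__ == "__main__":
#     multiprocess_with_pool(n_processes=4)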