Python 在进程之间共享锁

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/25557686/

Date: 2020-08-18 20:22:20  Source: igfitidea

Python sharing a lock between processes

Tags: python, locking, multiprocessing, share

Asked by DJMcCarthy12

I am attempting to use a partial function so that pool.map() can target a function that has more than one parameter (in this case a Lock() object).

Here is example code (taken from an answer to a previous question of mine):

import multiprocessing
from functools import partial

def target(lock, iterable_item):
    # Do cool stuff with iterable_item
    if (... some condition here ...):
        lock.acquire()
        # Write to stdout or logfile, etc.
        lock.release()

def main():
    iterable = [1, 2, 3, 4, 5]
    pool = multiprocessing.Pool()
    l = multiprocessing.Lock()
    func = partial(target, l)
    pool.map(func, iterable)
    pool.close()
    pool.join()

However, when I run this code, I get the error:

RuntimeError: Lock objects should only be shared between processes through inheritance.

What am I missing here? How can I share the lock between my subprocesses?

Accepted answer by dano

You can't pass normal multiprocessing.Lock objects to Pool methods, because they can't be pickled. There are two ways to get around this. One is to create a Manager() and pass a Manager.Lock():
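
The failure can be reproduced without a Pool at all: pickling the lock directly raises the same error. A minimal sketch:

```python
import multiprocessing
import pickle

lock = multiprocessing.Lock()
try:
    # Pool methods pickle their arguments; this is the step that fails
    pickle.dumps(lock)
except RuntimeError as e:
    print(e)  # Lock objects should only be shared between processes through inheritance
```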

def main():
    iterable = [1, 2, 3, 4, 5]
    pool = multiprocessing.Pool()
    m = multiprocessing.Manager()
    l = m.Lock()
    func = partial(target, l)
    pool.map(func, iterable)
    pool.close()
    pool.join()

This is a little bit heavyweight, though; using a Manager requires spawning another process to host the Manager server. And all calls to acquire/release the lock have to be sent to that server via IPC.
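
By contrast, the object returned by Manager.Lock() is a proxy, and proxies are designed to be pickled and shipped to workers; a quick sketch of the difference:

```python
import multiprocessing
import pickle

m = multiprocessing.Manager()
l = m.Lock()
# The proxy (a reference to the lock living in the manager process) pickles fine,
# which is why it can be passed through Pool.map
data = pickle.dumps(l)
m.shutdown()
```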

The other option is to pass the regular multiprocessing.Lock() at Pool creation time, using the initializer kwarg. This will make your lock instance global in all the child workers:

def target(iterable_item):
    # Do cool stuff with iterable_item
    if (... some condition here ...):
        lock.acquire()
        # Write to stdout or logfile, etc.
        lock.release()

def init(l):
    global lock
    lock = l

def main():
    iterable = [1, 2, 3, 4, 5]
    l = multiprocessing.Lock()
    pool = multiprocessing.Pool(initializer=init, initargs=(l,))
    pool.map(target, iterable)
    pool.close()
    pool.join()

The second solution has the side-effect of no longer requiring partial.
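
That said, the two techniques compose: the lock goes through the initializer, while ordinary picklable arguments can still be bound with partial. A sketch under that assumption (work and its factor argument are illustrative names, not from the original answer):

```python
import multiprocessing
from functools import partial

def init(l):
    global lock
    lock = l

def work(factor, item):
    with lock:  # Lock also supports the context-manager protocol
        return factor * item

def main():
    l = multiprocessing.Lock()
    # Only the lock must avoid pickling; factor is bound via partial as usual
    with multiprocessing.Pool(2, initializer=init, initargs=(l,)) as pool:
        return pool.map(partial(work, 2), range(5))

if __name__ == '__main__':
    print(main())  # [0, 2, 4, 6, 8]
```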

Answered by Tom Pohl

Here's a version (using Barrier instead of Lock, but you get the idea) which also works on Windows (where the missing fork causes additional trouble):

import multiprocessing as mp

def procs(uid_barrier):
    uid, barrier = uid_barrier
    print(uid, 'waiting')
    barrier.wait()
    print(uid, 'past barrier')    

def main():
    N_PROCS = 10
    with mp.Manager() as man:
        barrier = man.Barrier(N_PROCS)
        with mp.Pool(N_PROCS) as p:
            p.map(procs, ((uid, barrier) for uid in range(N_PROCS)))

if __name__ == '__main__':
    mp.freeze_support()
    main()