python multiprocessing pool terminate
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/16401031/
Asked by tk421storm
I'm working on a renderfarm, and I need my clients to be able to launch multiple instances of a renderer without blocking, so the client can receive new commands. I've got that working correctly; however, I'm having trouble terminating the created processes.
At the global level, I define my pool (so that I can access it from any function):
p = Pool(2)
I then call my renderer with apply_async:
for i in range(totalInstances):
    p.apply_async(render, (allRenderArgs[i], args[2]), callback=renderFinished)
p.close()
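For reference, when the workers run a plain Python function, Pool.terminate() does kill them as expected. A minimal, self-contained sketch (render here is a hypothetical stand-in for the real renderer, not the questioner's actual code):

```python
import time
from multiprocessing import Pool

def render(job_id):
    # stand-in for a long-running render job
    time.sleep(60)
    return job_id

def launch_and_terminate():
    p = Pool(2)
    for i in range(4):
        p.apply_async(render, (i,))
    p.close()
    time.sleep(0.5)    # let the workers pick up their tasks
    p.terminate()      # sends SIGTERM to the worker processes themselves
    p.join()           # reaps them; returns almost immediately
    return "terminated"

if __name__ == "__main__":
    print(launch_and_terminate())
```

The whole script exits in well under a second, even though each task would otherwise sleep for a minute.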
That function finishes, launches the processes in the background, and waits for new commands. I've made a simple command that will kill the client and stop the renders:
def close():
    'close this client instance'
    tn.write ("say "+USER+" is leaving the farm\r\n")
    try:
        p.terminate()
    except Exception,e:
        print str(e)
        sys.exit()
    sys.exit()
It doesn't seem to raise an error (it would print one), and the python process exits, but the background processes keep running. Can anyone recommend a better way of controlling these launched programs?
Accepted answer by tk421storm
Found the answer to my own question. The primary problem was that I was calling a third-party application rather than a function. When I call the subprocess [either using call() or Popen()] it creates a new instance of python whose only purpose is to call the new application. However when python exits, it will kill this new instance of python and leave the application running.
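This failure mode is easy to reproduce. In the sketch below (illustrative only, not the original code), the worker blocks in subprocess.call, so the external command is a child of the worker; Pool.terminate() kills the python worker but leaves that child running as an orphan:

```python
import subprocess
import time
from multiprocessing import Pool

def render(cmd):
    # the external program becomes a child of the *worker*, not of main
    return subprocess.call(cmd)

def demo():
    p = Pool(1)
    p.apply_async(render, (["sleep", "60"],))
    time.sleep(0.5)    # let the worker start the external command
    p.terminate()      # kills the python worker only
    p.join()

if __name__ == "__main__":
    demo()    # afterwards, "sleep 60" is still alive, now orphaned
```

Running `pgrep -f "sleep 60"` after the script exits shows the orphaned command still alive.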
The solution is to do it the hard way: find the pid of the python process that is created, get the children of that pid, and kill them. This code is specific to OS X; simpler code (that doesn't rely on grep) is available for Linux.
for process in pool:
    processId = process.pid
    print "attempting to terminate "+str(processId)
    command = " ps -o pid,ppid -ax | grep "+str(processId)+" | cut -f 1 -d \" \" | tail -1"
    ps_command = Popen(command, shell=True, stdout=PIPE)
    ps_output = ps_command.stdout.read()
    retcode = ps_command.wait()
    assert retcode == 0, "ps command returned %d" % retcode
    print "child process pid: "+ str(ps_output)
    os.kill(int(ps_output), signal.SIGTERM)
    os.kill(int(processId), signal.SIGTERM)
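A more portable alternative on Linux and OS X (Python 3) is to start the external program in its own process group and signal the whole group, which avoids parsing ps output entirely. A sketch, with launch_renderer/stop_renderer as hypothetical helper names:

```python
import os
import signal
import subprocess

def launch_renderer(cmd):
    # start_new_session=True puts the child in its own session/process
    # group, so the whole tree it spawns can be signalled at once,
    # even after the python process that launched it has exited
    return subprocess.Popen(cmd, start_new_session=True)

def stop_renderer(proc):
    # signal the entire process group, not just the immediate child
    os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
    proc.wait()

if __name__ == "__main__":
    proc = launch_renderer(["sleep", "60"])  # stand-in render command
    stop_renderer(proc)
    print(proc.returncode)  # negative return code: killed by a signal
```

This way the cleanup loop only needs the Popen handles you already hold, with no grep/cut pipeline.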
Answered by mdscruggs
If you're still experiencing this issue, you could try simulating a Pool with daemonic processes (assuming you are starting the pool/processes from a non-daemonic process). I doubt this is the best solution since it seems like your Pool processes should be exiting, but this is all I could come up with. I don't know what your callback does so I'm not sure where to put it in my example below.
I also suggest trying to create your Pool in __main__ due to my experience (and the docs) with weirdness occurring when processes are spawned globally. This is especially true if you're on Windows: http://docs.python.org/2/library/multiprocessing.html#windows
from multiprocessing import Process, JoinableQueue

# the function for each process in our pool
def pool_func(q):
    while True:
        allRenderArg, otherArg = q.get()  # blocks until the queue has an item
        try:
            render(allRenderArg, otherArg)
        finally:
            q.task_done()

# best practice to go through main for multiprocessing
if __name__ == '__main__':
    # create the pool
    pool_size = 2
    pool = []
    q = JoinableQueue()
    for x in range(pool_size):
        pool.append(Process(target=pool_func, args=(q,)))

    # start the pool, making it "daemonic" (the pool should exit when this proc exits)
    for p in pool:
        p.daemon = True
        p.start()

    # submit jobs to the queue
    for i in range(totalInstances):
        q.put((allRenderArgs[i], args[2]))

    # wait for all tasks to complete, then exit
    q.join()
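For completeness, here is a condensed, runnable version of this pattern, with the render call replaced by a trivial stand-in. It shows the normal q.join() exit plus an explicit terminate of the daemonic workers (they would also die automatically when the main process exits):

```python
from multiprocessing import Process, JoinableQueue

def pool_func(q):
    while True:
        item = q.get()   # blocks until the queue has an item
        try:
            pass         # stand-in for render(*item)
        finally:
            q.task_done()

def run_pool(jobs):
    q = JoinableQueue()
    pool = [Process(target=pool_func, args=(q,), daemon=True)
            for _ in range(2)]
    for p in pool:
        p.start()
    for job in jobs:
        q.put(job)
    q.join()             # returns once every submitted job is task_done()
    for p in pool:
        p.terminate()    # explicit cleanup of the idle daemonic workers
    return "done"

if __name__ == "__main__":
    print(run_pool([(i, None) for i in range(4)]))
```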
Answered by eri
I found a solution: stop the pool in a separate thread, like this:
def close_pool():
    global pool
    pool.close()
    pool.terminate()
    pool.join()

def term(*args, **kwargs):
    sys.stderr.write('\nStopping...')
    # httpd.shutdown()
    stophttp = threading.Thread(target=httpd.shutdown)
    stophttp.start()
    stoppool = threading.Thread(target=close_pool)
    stoppool.daemon = True
    stoppool.start()

signal.signal(signal.SIGTERM, term)
signal.signal(signal.SIGINT, term)
signal.signal(signal.SIGQUIT, term)
Works fine, and did every time I tested.
Answered by rodemon
# -*- coding:utf-8 -*-
import multiprocessing
import time
import sys
import threading
from functools import partial

#> work func
def f(a, b, c, d, e):
    print('start')
    time.sleep(4)
    print(a, b, c, d, e)

###########> subProcess func
#1. start a thread for the work func
#2. wait on the thread with a timeout
#3. exit the subprocess
###########
def mulPro(f, *args, **kwargs):
    timeout = kwargs.get('timeout', None)
    #1.
    t = threading.Thread(target=f, args=args)
    t.setDaemon(True)
    t.start()
    #2.
    t.join(timeout)
    #3.
    sys.exit()

if __name__ == "__main__":
    p = multiprocessing.Pool(5)
    for i in range(5):
        #1. wrap the work func with the "subProcess func"
        new_f = partial(mulPro, f, timeout=8)
        #2. fire it off
        p.apply_async(new_f, args=(1, 2, 3, 4, 5))
        # p.apply_async(f, args=(1,2,3,4,5), timeout=2)
    for i in range(10):
        time.sleep(1)
        print(i + 1, "s")
    p.close()
    # p.join()

