Python non-blocking subprocess call

Disclaimer: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same license and attribute it to the original authors (not me). Original StackOverflow question: http://stackoverflow.com/questions/16071866/

Date: 2020-08-18 21:42:40  Source: igfitidea

Non blocking subprocess.call

Tags: python, subprocess

Asked by DavidJB

I'm trying to make a non-blocking subprocess call to run a slave.py script from my main.py program. I need to pass args from main.py to slave.py once, when it (slave.py) is first started via subprocess.call; after this, slave.py runs for a period of time and then exits.

main.py
for insert, (list) in enumerate(list, start =1):

    sys.args = [list]
    subprocess.call(["python", "slave.py", sys.args], shell = True)


{loop through program and do more stuff..}

And my slave script

slave.py
print sys.args
while True:
    {do stuff with args in loop till finished}
    time.sleep(30)

Currently, slave.py blocks main.py from running the rest of its tasks; I simply want slave.py to be independent of main.py once I've passed args to it. The two scripts no longer need to communicate.

I've found a few posts on the net about non-blocking subprocess.call, but most of them are centered on requiring communication with slave.py at some point, which I currently do not need. Would anyone know how to implement this in a simple fashion...?

Answered by mgilson

You should use subprocess.Popen instead of subprocess.call.

Something like:

subprocess.Popen(["python", "slave.py"] + sys.argv[1:])
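Adapted to the question's loop, that might look like the sketch below. The `jobs` list and the inline one-liner child are stand-ins for the OP's `list` and slave.py, so this runs self-contained:

```python
import subprocess
import sys

# Stand-ins for the OP's `list` and slave.py: each item becomes the
# argument for one child process.
jobs = ["alpha", "beta", "gamma"]

procs = []
for arg in jobs:
    # Popen returns immediately; the child runs concurrently with main.py.
    child = [sys.executable, "-c",
             "import sys; print('slave got', sys.argv[1])", arg]
    procs.append(subprocess.Popen(child))

# main.py is free to do other work here while the slaves run...

# Optionally reap the children later so none are left as zombies.
exit_codes = [p.wait() for p in procs]
print(exit_codes)
```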

From the docs on subprocess.call:

Run the command described by args. Wait for command to complete, then return the returncode attribute.

(Also don't use a list to pass in the arguments if you're going to use shell=True.)
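To illustrate that point, here is a small sketch contrasting the two forms; it uses `echo` as the command, so it assumes a POSIX environment:

```python
import subprocess

# With shell=True, pass the whole command line as one string;
# the shell does the word-splitting.
p1 = subprocess.Popen("echo from-shell", shell=True)
p1.wait()

# Without shell=True, pass a list of argv elements and no shell is
# involved (generally safer: no quoting or injection pitfalls).
p2 = subprocess.Popen(["echo", "from-list"])
p2.wait()
```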


Here's a MCVE¹ example that demonstrates a non-blocking subprocess call:

import subprocess
import time

p = subprocess.Popen(['sleep', '5'])

while p.poll() is None:
    print('Still sleeping')
    time.sleep(1)

print('Not sleeping any longer.  Exited with returncode %d' % p.returncode)

An alternative approach, relying on more recent additions to the Python language that allow for coroutine-based parallelism, is:

# python3.5 required but could be modified to work with python3.4.
import asyncio

async def do_subprocess():
    print('Subprocess sleeping')
    proc = await asyncio.create_subprocess_exec('sleep', '5')
    returncode = await proc.wait()
    print('Subprocess done sleeping.  Return code = %d' % returncode)

async def sleep_report(number):
    for i in range(number + 1):
        print('Slept for %d seconds' % i)
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()

tasks = [
    asyncio.ensure_future(do_subprocess()),
    asyncio.ensure_future(sleep_report(5)),
]

loop.run_until_complete(asyncio.gather(*tasks))
loop.close()
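
On Python 3.7 and later, the explicit event-loop setup above can be replaced with asyncio.run. A minimal sketch of the same idea (not part of the original answer; it shortens the sleep so it finishes quickly, and assumes a POSIX `sleep` binary):

```python
import asyncio

async def do_subprocess():
    # Launch the child without blocking the event loop.
    proc = await asyncio.create_subprocess_exec('sleep', '1')
    return await proc.wait()

async def main():
    # gather drives both coroutines concurrently on one loop.
    returncode, _ = await asyncio.gather(
        do_subprocess(),
        asyncio.sleep(1),   # stands in for other work in the parent
    )
    print('Return code = %d' % returncode)
    return returncode

rc = asyncio.run(main())
```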

¹ Tested on OS X using python2.7 & python3.6

Answered by zwol

There are three levels of thoroughness here.

As mgilson says, if you just swap out subprocess.call for subprocess.Popen, keeping everything else the same, then main.py will not wait for slave.py to finish before it continues. That may be enough by itself.

If you care about zombie processes hanging around, you should save the object returned from subprocess.Popen and at some later point call its wait method. (The zombies will automatically go away when main.py exits, so this is only a serious problem if main.py runs for a very long time and/or might create many subprocesses.)

And finally, if you don't want a zombie but you also don't want to decide where to do the waiting (this might be appropriate if both processes run for a long and unpredictable time afterward), use the python-daemon library to have the slave disassociate itself from the master -- in that case you can continue using subprocess.call in the master.

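
A sketch of the middle option (save the Popen object, reap it later). It also shows the lighter-weight start_new_session flag, which merely detaches the child into its own session (POSIX, Python 3.2+); this is only an illustration of the idea, not the full daemonization that python-daemon performs:

```python
import subprocess
import sys

# Middle option: keep the Popen object around instead of discarding it.
p = subprocess.Popen(
    [sys.executable, "-c", "print('slave running')"],
    start_new_session=True,  # child gets its own session (POSIX only)
)

# ... main.py does other work here ...

# Reap the child at some convenient later point so it does not linger
# as a zombie while main.py keeps running.
returncode = p.wait()
print(returncode)
```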