Live output from subprocess command
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the CC BY-SA license and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/18421757/
Asked by DilithiumMatrix
I'm using a python script as a driver for a hydrodynamics code. When it comes time to run the simulation, I use subprocess.Popen to run the code, collecting the output from stdout and stderr into a subprocess.PIPE --- then I can print (and save to a log-file) the output information, and check for any errors. The problem is, I have no idea how the code is progressing. If I run it directly from the command line, it gives me output about what iteration it's at, what time, what the next time-step is, etc.
Is there a way to both store the output (for logging and error checking), and also produce a live-streaming output?
The relevant section of my code:
ret_val = subprocess.Popen(run_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
output, errors = ret_val.communicate()
log_file.write(output)
print output
if ret_val.returncode:
    print "RUN failed\n\n%s\n\n" % (errors)
    success = False

if errors: log_file.write("\n\n%s\n\n" % errors)
Originally I was piping the run_command through tee so that a copy went directly to the log-file, and the stream still output directly to the terminal -- but that way I can't store any errors (to my knowledge).
Edit:
Temporary solution:
ret_val = subprocess.Popen(run_command, stdout=log_file, stderr=subprocess.PIPE, shell=True)
while not ret_val.poll():
    log_file.flush()
then, in another terminal, run tail -f log.txt (s.t. log_file = 'log.txt').
Accepted answer by Viktor Kerkez
You have two ways of doing this: either create an iterator from the read or readline functions and do:
import subprocess
import sys

with open('test.log', 'w') as f:  # replace 'w' with 'wb' for Python 3
    process = subprocess.Popen(your_command, stdout=subprocess.PIPE)
    for c in iter(lambda: process.stdout.read(1), ''):  # replace '' with b'' for Python 3
        sys.stdout.write(c)
        f.write(c)
or
import subprocess
import sys

with open('test.log', 'w') as f:  # replace 'w' with 'wb' for Python 3
    process = subprocess.Popen(your_command, stdout=subprocess.PIPE)
    for line in iter(process.stdout.readline, ''):  # replace '' with b'' for Python 3
        sys.stdout.write(line)
        f.write(line)
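On Python 3.7+, a hedged sketch of the same line-based tee can use text mode so there is no bytes/str juggling (the child command below is invented purely for illustration; substitute your own):

```python
import subprocess
import sys

# Hypothetical command used only for illustration; substitute your own.
your_command = [sys.executable, "-c", "print('line one'); print('line two')"]

with open("test.log", "w") as f:
    process = subprocess.Popen(your_command, stdout=subprocess.PIPE, text=True)
    for line in process.stdout:  # line-by-line until EOF
        sys.stdout.write(line)   # live output on the terminal
        f.write(line)            # tee'd copy into the log file
    process.wait()
```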
Or you can create a reader and a writer file. Pass the writer to the Popen and read from the reader:
import io
import time
import subprocess
import sys

filename = 'test.log'
with io.open(filename, 'wb') as writer, io.open(filename, 'rb', 1) as reader:
    process = subprocess.Popen(command, stdout=writer)
    while process.poll() is None:
        sys.stdout.write(reader.read())
        time.sleep(0.5)
    # Read the remaining
    sys.stdout.write(reader.read())
This way you will have the data written in test.log as well as on the standard output.
The only advantage of the file approach is that your code doesn't block. So you can do whatever you want in the meantime and read from the reader whenever you want, in a non-blocking way. When you use PIPE, the read and readline functions will block until either one character or one line, respectively, is written to the pipe.
Answer by Guy Sirton
A good but "heavyweight" solution is to use Twisted - see the bottom.
If you're willing to live with only stdout, something along those lines should work:
import subprocess
import sys

popenobj = subprocess.Popen(["ls", "-Rl"], stdout=subprocess.PIPE)
while not popenobj.poll():
    stdoutdata = popenobj.stdout.readline()
    if stdoutdata:
        sys.stdout.write(stdoutdata)
    else:
        break
print "Return code", popenobj.returncode
(If you use read() it tries to read the entire "file", which isn't useful; what we really could use here is something that reads all the data that's in the pipe right now.)
One might also try to approach this with threading, e.g.:
import subprocess
import sys
import threading

popenobj = subprocess.Popen("ls", stdout=subprocess.PIPE, shell=True)

def stdoutprocess(o):
    while True:
        stdoutdata = o.stdout.readline()
        if stdoutdata:
            sys.stdout.write(stdoutdata)
        else:
            break

t = threading.Thread(target=stdoutprocess, args=(popenobj,))
t.start()
popenobj.wait()
t.join()
print "Return code", popenobj.returncode
Now we could potentially add stderr as well by having two threads.
Note however the subprocess docs discourage using these files directly and recommend using communicate() (mostly concerned with deadlocks, which I think isn't an issue above), and the solutions are a little clunky, so it really seems like the subprocess module isn't quite up to the job (also see: http://www.python.org/dev/peps/pep-3145/) and we need to look at something else.
A more involved solution is to use Twisted as shown here: https://twistedmatrix.com/documents/11.1.0/core/howto/process.html
The way you do this with Twisted is to create your process using reactor.spawnProcess() and provide a ProcessProtocol that then processes output asynchronously. The Twisted sample Python code is here: https://twistedmatrix.com/documents/11.1.0/core/howto/listings/process/process.py
Answer by Alp
It looks like line-buffered output will work for you, in which case something like the following might suit. (Caveat: it's untested.) This will only give the subprocess's stdout in real time. If you want to have both stderr and stdout in real time, you'll have to do something more complex with select.
proc = subprocess.Popen(run_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
while proc.poll() is None:
    line = proc.stdout.readline()
    print line
    log_file.write(line + '\n')

# Might still be data on stdout at this point. Grab any remainder.
for line in proc.stdout.read().split('\n'):
    print line
    log_file.write(line + '\n')

# Do whatever you want with proc.stderr here...
Answer by torek
Executive Summary (or "tl;dr" version): it's easy when there's at most one subprocess.PIPE, otherwise it's hard.
It may be time to explain a bit about how subprocess.Popen does its thing.
(Caveat: this is for Python 2.x, although 3.x is similar; and I'm quite fuzzy on the Windows variant. I understand the POSIX stuff much better.)
The Popen function needs to deal with zero-to-three I/O streams, somewhat simultaneously. These are denoted stdin, stdout, and stderr as usual.
You can provide:
- None, indicating that you don't want to redirect the stream. It will inherit these as usual instead. Note that on POSIX systems, at least, this does not mean it will use Python's sys.stdout, just Python's actual stdout; see demo at end.
- An int value. This is a "raw" file descriptor (in POSIX at least). (Side note: PIPE and STDOUT are actually ints internally, but are "impossible" descriptors, -1 and -2.)
- A stream (really, any object with a fileno method). Popen will find the descriptor for that stream, using stream.fileno(), and then proceed as for an int value.
- subprocess.PIPE, indicating that Python should create a pipe.
- subprocess.STDOUT (for stderr only): tell Python to use the same descriptor as for stdout. This only makes sense if you provided a (non-None) value for stdout, and even then, it is only needed if you set stdout=subprocess.PIPE. (Otherwise you can just provide the same argument you provided for stdout, e.g., Popen(..., stdout=stream, stderr=stream).)
The easiest cases (no pipes)
If you redirect nothing (leave all three as the default None value or supply explicit None), Popen has it quite easy. It just needs to spin off the subprocess and let it run. Or, if you redirect to a non-PIPE (an int or a stream's fileno()) it's still easy, as the OS does all the work. Python just needs to spin off the subprocess, connecting its stdin, stdout, and/or stderr to the provided file descriptors.
The still-easy case: one pipe
If you redirect only one stream, Popen still has things pretty easy. Let's pick one stream at a time and watch.
Suppose you want to supply some stdin, but let stdout and stderr go un-redirected, or go to a file descriptor. As the parent process, your Python program simply needs to use write() to send data down the pipe. You can do this yourself, e.g.:
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
proc.stdin.write('here, have some data\n') # etc
or you can pass the stdin data to proc.communicate(), which then does the stdin.write shown above. There is no output coming back, so communicate() has only one other real job: it also closes the pipe for you. (If you don't call proc.communicate() you must call proc.stdin.close() to close the pipe, so that the subprocess knows there is no more data coming through.)
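A minimal sketch of the communicate() form of this stdin feeding (the child command is invented for illustration; it simply upper-cases whatever arrives on its stdin):

```python
import subprocess
import sys

# Invented child command: upper-cases its stdin.
cmd = [sys.executable, "-c",
       "import sys; sys.stdout.write(sys.stdin.read().upper())"]
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# communicate() writes the data, closes stdin, and collects the output.
out, _ = proc.communicate(b'here, have some data\n')
```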
Suppose you want to capture stdout but leave stdin and stderr alone. Again, it's easy: just call proc.stdout.read() (or equivalent) until there is no more output. Since proc.stdout is a normal Python I/O stream you can use all the normal constructs on it, like:
for line in proc.stdout:
or, again, you can use proc.communicate(), which simply does the read() for you.
If you want to capture only stderr, it works the same as with stdout.
There's one more trick before things get hard. Suppose you want to capture stdout, and also capture stderr but on the same pipe as stdout:
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
In this case, subprocess "cheats"! Well, it has to do this, so it's not really cheating: it starts the subprocess with both its stdout and its stderr directed into the (single) pipe-descriptor that feeds back to its parent (Python) process. On the parent side, there's again only a single pipe-descriptor for reading the output. All the "stderr" output shows up in proc.stdout, and if you call proc.communicate(), the stderr result (second value in the tuple) will be None, not a string.
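A minimal sketch of this merge behavior (the child command is invented for illustration): everything arrives on the single stdout pipe, and the stderr slot of the tuple is None:

```python
import subprocess
import sys

# Invented command that writes to both streams.
cmd = [sys.executable, "-c",
       "import sys; sys.stdout.write('to stdout\\n'); sys.stderr.write('to stderr\\n')"]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, err = proc.communicate()
# Both messages come back via proc.stdout; err is None because
# no separate stderr pipe was ever created.
```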
The hard cases: two or more pipes
The problems all come about when you want to use at least two pipes. In fact, the subprocess code itself has this bit:
def communicate(self, input=None):
    ...
    # Optimization: If we are only using one pipe, or no pipe at
    # all, using select() or threads is unnecessary.
    if [self.stdin, self.stdout, self.stderr].count(None) >= 2:
But, alas, here we've made at least two, and maybe three, different pipes, so the count(None) returns either 1 or 0. We must do things the hard way.
On Windows, this uses threading.Thread to accumulate results for self.stdout and self.stderr, and has the parent thread deliver self.stdin input data (and then close the pipe).
On POSIX, this uses poll if available, otherwise select, to accumulate output and deliver stdin input. All this runs in the (single) parent process/thread.
Threads or poll/select are needed here to avoid deadlock. Suppose, for instance, that we've redirected all three streams to three separate pipes. Suppose further that there's a small limit on how much data can be stuffed into to a pipe before the writing process is suspended, waiting for the reading process to "clean out" the pipe from the other end. Let's set that small limit to a single byte, just for illustration. (This is in fact how things work, except that the limit is much bigger than one byte.)
If the parent (Python) process tries to write several bytes, say 'go\n', to proc.stdin, the first byte goes in and then the second causes the Python process to suspend, waiting for the subprocess to read the first byte, emptying the pipe.
Meanwhile, suppose the subprocess decides to print a friendly "Hello! Don't Panic!" greeting. The H goes into its stdout pipe, but the e causes it to suspend, waiting for its parent to read that H, emptying the stdout pipe.
Now we're stuck: the Python process is asleep, waiting to finish saying "go", and the subprocess is also asleep, waiting to finish saying "Hello! Don't Panic!".
The subprocess.Popen code avoids this problem with threading-or-select/poll. When bytes can go over the pipes, they go. When they can't, only a thread (not the whole process) has to sleep; or, in the case of select/poll, the Python process waits simultaneously for "can write" or "data available", writes to the process's stdin only when there is room, and reads its stdout and/or stderr only when data are ready. The proc.communicate() code (actually _communicate, where the hairy cases are handled) returns once all stdin data (if any) have been sent and all stdout and/or stderr data have been accumulated.
If you want to read both stdout and stderr on two different pipes (regardless of any stdin redirection), you will need to avoid deadlock too. The deadlock scenario here is different (it occurs when the subprocess writes something long to stderr while you're pulling data from stdout, or vice versa) but it's still there.
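One hedged way to avoid that two-pipe deadlock, mirroring what _communicate does on Windows, is a reader thread per pipe (the flooding child command is invented for illustration):

```python
import subprocess
import sys
import threading

def drain(pipe, sink):
    # Runs in its own thread, so a full pipe on one stream
    # never blocks the reader of the other.
    for line in iter(pipe.readline, b''):
        sink.append(line)
    pipe.close()

# Invented command that floods both streams.
cmd = [sys.executable, "-c",
       "import sys; sys.stdout.write('o\\n' * 1000); sys.stderr.write('e\\n' * 1000)"]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

out_lines, err_lines = [], []
threads = [threading.Thread(target=drain, args=(proc.stdout, out_lines)),
           threading.Thread(target=drain, args=(proc.stderr, err_lines))]
for t in threads: t.start()
for t in threads: t.join()
proc.wait()
```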
The Demo
I promised to demonstrate that, un-redirected, Python subprocesses write to the underlying stdout, not sys.stdout. So, here is some code:
from cStringIO import StringIO
import os
import subprocess
import sys

def show1():
    print 'start show1'
    save = sys.stdout
    sys.stdout = StringIO()
    print 'sys.stdout being buffered'
    proc = subprocess.Popen(['echo', 'hello'])
    proc.wait()
    in_stdout = sys.stdout.getvalue()
    sys.stdout = save
    print 'in buffer:', in_stdout

def show2():
    print 'start show2'
    save = sys.stdout
    sys.stdout = open(os.devnull, 'w')
    print 'after redirect sys.stdout'
    proc = subprocess.Popen(['echo', 'hello'])
    proc.wait()
    sys.stdout = save

show1()
show2()
When run:
$ python out.py
start show1
hello
in buffer: sys.stdout being buffered
start show2
hello
Note that the first routine will fail if you add stdout=sys.stdout, as a StringIO object has no fileno. The second will omit the hello if you add stdout=sys.stdout since sys.stdout has been redirected to os.devnull.
(If you redirect Python's file-descriptor-1, the subprocess will follow that redirection. The open(os.devnull, 'w') call produces a stream whose fileno() is greater than 2.)
Answer by Vinay Sajip
If you're able to use third-party libraries, you might be able to use something like sarge (disclosure: I'm its maintainer). This library allows non-blocking access to output streams from subprocesses - it's layered over the subprocess module.
Answer by Jughead
We can also use the default file iterator for reading stdout, instead of using the iter construct with readline().
import subprocess
import sys

process = subprocess.Popen(your_command, stdout=subprocess.PIPE)
for line in process.stdout:
    sys.stdout.write(line)
Answer by xaav
Why not set stdout directly to sys.stdout? And if you need to output to a log as well, then you can simply override the write method of f.
import sys
import subprocess

class SuperFile(open.__class__):

    def write(self, data):
        sys.stdout.write(data)
        super(SuperFile, self).write(data)

f = SuperFile("log.txt", "w+")
process = subprocess.Popen(command, stdout=f, stderr=f)
Answer by t.animal
Here is a class which I'm using in one of my projects. It redirects output of a subprocess to the log. At first I tried simply overwriting the write method, but that doesn't work, as the subprocess will never call it (redirection happens at the file-descriptor level). So I'm using my own pipe, similar to how it's done in the subprocess module. This has the advantage of encapsulating all logging/printing logic in the adapter, and you can simply pass instances of the logger to Popen: subprocess.Popen("/path/to/binary", stderr=LogAdapter("foo"))
class LogAdapter(threading.Thread):

    def __init__(self, logname, level=logging.INFO):
        super().__init__()
        self.log = logging.getLogger(logname)
        self.readpipe, self.writepipe = os.pipe()

        logFunctions = {
            logging.DEBUG: self.log.debug,
            logging.INFO: self.log.info,
            logging.WARN: self.log.warn,
            logging.ERROR: self.log.warn,
        }

        try:
            self.logFunction = logFunctions[level]
        except KeyError:
            self.logFunction = self.log.info

    def fileno(self):
        # When fileno is called this indicates the subprocess is about to fork => start thread
        self.start()
        return self.writepipe

    def finished(self):
        """If the write-filedescriptor is not closed this thread will
        prevent the whole program from exiting. You can use this method
        to clean up after the subprocess has terminated."""
        os.close(self.writepipe)

    def run(self):
        inputFile = os.fdopen(self.readpipe)

        while True:
            line = inputFile.readline()

            if len(line) == 0:
                # no new data was added
                break

            self.logFunction(line.strip())
If you don't need logging but simply want to use print() you can obviously remove large portions of the code and keep the class shorter. You could also expand it by an __enter__ and __exit__ method and call finished in __exit__ so that you could easily use it as a context manager.
Answer by sivann
All of the above solutions I tried failed either to separate stderr and stdout output (multiple pipes), or blocked forever when the OS pipe buffer was full, which happens when the command you are running produces output too fast (there is a warning about this in the Python subprocess manual for poll()). The only reliable way I found was through select, but this is a POSIX-only solution:
import subprocess
import sys
import os
import select
from errno import EINTR

# returns command exit status, stdout text, stderr text
# rtoutput: show realtime output while running
def run_script(cmd, rtoutput=0):
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    poller = select.poll()
    poller.register(p.stdout, select.POLLIN)
    poller.register(p.stderr, select.POLLIN)

    coutput = ''
    cerror = ''
    fdhup = {}
    fdhup[p.stdout.fileno()] = 0
    fdhup[p.stderr.fileno()] = 0
    while sum(fdhup.values()) < len(fdhup):
        try:
            r = poller.poll(1)
        except select.error, err:
            if err.args[0] != EINTR:
                raise
            r = []
        for fd, flags in r:
            if flags & (select.POLLIN | select.POLLPRI):
                c = os.read(fd, 1024)
                if rtoutput:
                    sys.stdout.write(c)
                    sys.stdout.flush()
                if fd == p.stderr.fileno():
                    cerror += c
                else:
                    coutput += c
            else:
                fdhup[fd] = 1
    return p.poll(), coutput.strip(), cerror.strip()
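On Python 3, an assumed equivalent can be written with the higher-level selectors module; this is a sketch, still POSIX-oriented, not a drop-in replacement:

```python
import os
import selectors
import subprocess
import sys

def run_script(cmd, rtoutput=False):
    # Returns (exit status, stdout text, stderr text); echoes live if rtoutput.
    p = subprocess.Popen(cmd, shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    sel = selectors.DefaultSelector()
    sel.register(p.stdout, selectors.EVENT_READ)
    sel.register(p.stderr, selectors.EVENT_READ)

    out = err = b''
    open_pipes = 2
    while open_pipes:
        for key, _ in sel.select():
            data = os.read(key.fd, 1024)
            if not data:  # EOF on this pipe
                sel.unregister(key.fileobj)
                open_pipes -= 1
                continue
            if rtoutput:
                sys.stdout.write(data.decode(errors='replace'))
                sys.stdout.flush()
            if key.fileobj is p.stderr:
                err += data
            else:
                out += data
    return p.wait(), out.decode().strip(), err.decode().strip()
```

For example, run_script("make", rtoutput=True) would echo output as it arrives while still returning the captured stdout and stderr texts.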
Answer by kabirbaidhya
In addition to all these answers, one simple approach could also be as follows:
process = subprocess.Popen(your_command, stdout=subprocess.PIPE)
while process.stdout.readable():
    line = process.stdout.readline()

    if not line:
        break

    print(line.strip())
Loop through the stream as long as it's readable, and if a read returns an empty result, stop.
The key here is that readline() returns a line (with \n at the end) as long as there's output, and an empty string if it's really at the end.
Hope this helps someone.