Linux: saving stdout from subprocess.Popen to a file, plus writing more content to the file
Declaration: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use it, you must likewise follow the CC BY-SA license, cite the original address and author information, and attribute it to the original authors (not me): StackOverflow
原文地址: http://stackoverflow.com/questions/3190825/
Saving stdout from subprocess.Popen to file, plus writing more stuff to the file
Asked by jasper77
I'm writing a Python script that uses subprocess.Popen to execute two programs (from compiled C code), each of which produces stdout. The script gets that output and saves it to a file. Because the output is sometimes large enough to overwhelm subprocess.PIPE, causing the script to hang, I send the stdout directly to the log file. I want my script to write something at the beginning and end of the file, and between the two subprocess.Popen calls. However, when I look at my log file, everything I wrote to the log file from the script sits together at the top of the file, followed by all the executable stdout. How can I interleave my added text into the file?
def run(cmd, logfile):
    p = subprocess.Popen(cmd, shell=True, universal_newlines=True, stdout=logfile)
    return p

def runTest(path, flags, name):
    log = open(name, "w")
    print >> log, "Calling executable A"
    a_ret = run(path + "executable_a_name" + flags, log)
    print >> log, "Calling executable B"
    b_ret = run(path + "executable_b_name" + flags, log)
    print >> log, "More stuff"
    log.close()
The log file has:

    Calling executable A
    Calling executable B
    More stuff
    [... stdout from both executables ...]
Is there a way I can flush A's stdout to the log after calling Popen, for example? One more thing that might be relevant: executable A starts and then waits on B; after B prints its output and finishes, A prints more output and finishes.
I'm using Python 2.4 on RHEL (Red Hat Enterprise Linux).
Accepted answer by Benno
You could call .wait() on each Popen object to be sure it has finished, and then call log.flush(). Maybe something like this:
def run(cmd, logfile):
    p = subprocess.Popen(cmd, shell=True, universal_newlines=True, stdout=logfile)
    ret_code = p.wait()
    logfile.flush()
    return ret_code
If you need to interact with the Popen object in your outer function, you could move the .wait() call there instead.
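For readers on Python 3 (where the `print >> log` syntax is gone), the accepted approach can be sketched like this. The `echo` command and the `demo.log` file name are stand-ins for the real executables and log file; note the flush is done before Popen so the parent's own text lands in the file before the child's output:

```python
import subprocess

def run(cmd, logfile):
    # Make sure our own buffered text hits the file before the child appends.
    logfile.flush()
    # Send the child's stdout straight to the log file, bypassing PIPE.
    p = subprocess.Popen(cmd, stdout=logfile)
    p.wait()  # block until the child has finished writing
    return p.returncode

with open("demo.log", "w") as log:
    log.write("Calling executable A\n")
    run(["echo", "output of A"], log)
    log.write("More stuff\n")
```

Because the child writes through a duplicate of the same file descriptor, the shared file offset advances past the child's output, so the parent's final write lands after it.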
Answered by Chris B.
You need to wait until the process has finished before you continue. I've also converted the code to use a context manager, which is cleaner.
def run(cmd, logfile):
    p = subprocess.Popen(cmd, shell=True, universal_newlines=True, stdout=logfile)
    p.wait()
    return p

def runTest(path, flags, name):
    with open(name, "w") as log:
        print >> log, "Calling executable A"
        a_ret = run(path + "executable_a_name" + flags, log)
        print >> log, "Calling executable B"
        b_ret = run(path + "executable_b_name" + flags, log)
        print >> log, "More stuff"
Answered by Peter Lyons
I say just keep it really simple. Pseudo-code basic logic:
write your start messages to logA
execute A with output to logA
write your in-between messages to logB
execute B with output to logB
write your final messages to logB
when A & B finish, write content of logB to the end of logA
delete logB
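A minimal Python 3 sketch of that logic; the file names and the `echo` stand-in children are made up for illustration:

```python
import os
import subprocess

def run_test(log_a_path, log_b_path):
    with open(log_a_path, "w") as log_a, open(log_b_path, "w") as log_b:
        log_a.write("start messages\n")
        log_a.flush()  # land in the file before A's output
        pa = subprocess.Popen(["echo", "output of A"], stdout=log_a)
        log_b.write("in-between messages\n")
        log_b.flush()  # land in the file before B's output
        pb = subprocess.Popen(["echo", "output of B"], stdout=log_b)
        pa.wait()
        pb.wait()
        log_b.write("final messages\n")
    # Both logs are closed (and flushed) here: append logB to logA, delete logB.
    with open(log_a_path, "a") as log_a, open(log_b_path) as log_b:
        log_a.write(log_b.read())
    os.remove(log_b_path)

run_test("a.log", "b.log")
```

Each child gets its own log, so the parent's messages and the child's output can never interleave out of order; the only synchronization needed is waiting for both before the final concatenation.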
Answered by jfs
As I understand it, the A program waits for B to do its thing, and A exits only after B exits.
If B can start without A running, then you could start the processes in the reverse order:
from os.path import join as pjoin
from subprocess import Popen

def run_async(cmd, logfile):
    print >>logfile, "calling", cmd
    p = Popen(cmd, stdout=logfile)
    print >>logfile, "started", cmd
    return p

def runTest(path, flags, name):
    log = open(name, "w", 1)  # line-buffered
    print >>log, 'calling both processes'
    pb = run_async([pjoin(path, "executable_b_name")] + flags.split(), log)
    pa = run_async([pjoin(path, "executable_a_name")] + flags.split(), log)
    print >>log, 'started both processes'
    pb.wait()
    print >>log, 'process B ended'
    pa.wait()
    print >>log, 'process A ended'
    log.close()
Note: calling log.flush() in the main process has no effect on the file buffers inside the child processes.
If the child processes use block buffering for stdout, then you could try to force them to flush sooner using pexpect, pty, or stdbuf (this assumes that the processes use line buffering if run interactively, or that they use the C stdio library for I/O).
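For example, with stdbuf from GNU coreutils you simply prefix the child's command line. This is a sketch assuming stdbuf is installed; `/usr/bin/printf` is just a stand-in for a real C-stdio child, and the log name is arbitrary:

```python
import subprocess

with open("stdbuf_demo.log", "w") as log:
    log.write("before child\n")
    log.flush()  # the parent's text must hit the fd before the child writes
    # stdbuf -oL asks the child's C stdio to line-buffer stdout, so each line
    # reaches the log as soon as it is printed, instead of sitting in a block
    # buffer until the process exits (the default when stdout is a file).
    p = subprocess.Popen(["stdbuf", "-oL", "printf", "line from child\\n"],
                         stdout=log)
    p.wait()
    log.write("after child\n")
```

stdbuf works by preloading a shared library that adjusts the stdio buffering mode at startup, so it cannot help with programs that bypass C stdio or set their own buffering explicitly.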