将标准输出重定向到 Python 中的文件?
声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/4675728/
Warning: these are provided under cc-by-sa 4.0 license. You are free to use/share it, But you must attribute it to the original authors (not me):
StackOverFlow
Redirect stdout to a file in Python?
提问by
How do I redirect stdout to an arbitrary file in Python?
如何将标准输出重定向到 Python 中的任意文件?
When a long-running Python script (e.g, web application) is started from within the ssh session and backgounded, and the ssh session is closed, the application will raise IOError and fail the moment it tries to write to stdout. I needed to find a way to make the application and modules output to a file rather than stdout to prevent failure due to IOError. Currently, I employ nohup to redirect output to a file, and that gets the job done, but I was wondering if there was a way to do it without using nohup, out of curiosity.
当一个长时间运行的 Python 脚本(例如 web 应用程序)在 ssh 会话中启动并放到后台运行,而 ssh 会话随后被关闭时,应用程序一旦尝试写入 stdout 就会抛出 IOError 并失败。我需要找到一种方法,让应用程序和各个模块把输出写到文件而不是 stdout,以避免因 IOError 而失败。目前我用 nohup 把输出重定向到文件,这确实能解决问题,但出于好奇,我想知道有没有不用 nohup 也能做到的办法。
I have already tried sys.stdout = open('somefile', 'w'), but this does not seem to prevent some external modules from still outputting to the terminal (or maybe the sys.stdout = ... line did not fire at all). I know it should work, from simpler scripts I've tested it on, but I haven't had time to test it on a web application yet.
我已经试过 sys.stdout = open('somefile', 'w'),但这似乎并不能阻止某些外部模块继续向终端输出(也可能是 sys.stdout = ... 这一行根本没有执行)。从我测试过的较简单的脚本来看它应该是有效的,但我还没来得及在 Web 应用程序上测试。
采纳答案by moinudin
If you want to do the redirection within the Python script, setting sys.stdout to a file object does the trick:
如果您想在 Python 脚本中进行重定向,将 sys.stdout 设置为一个文件对象即可:
import sys
sys.stdout = open('file', 'w')
print('test')
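One caveat worth adding (not part of the original answer): the snippet above never restores sys.stdout or closes the file. A minimal sketch of doing both explicitly:
补充一个注意点(不属于原答案):上面的代码片段没有恢复 sys.stdout,也没有关闭文件。下面是显式处理这两点的一个最简示例:

import sys

f = open('file', 'w')
sys.stdout = f
print('test')                  # written to 'file'

sys.stdout = sys.__stdout__    # sys.__stdout__ holds the original stream
f.close()
print('back on the terminal')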
A far more common method is to use shell redirection when executing (same on Windows and Linux):
一种更常见的方法是在执行时使用 shell 重定向(在 Windows 和 Linux 上相同):
$ python foo.py > file
回答by Cat Plus Plus
import sys
sys.stdout = open('stdout.txt', 'w')
回答by Yuda Prawira
You can try this, it's much better:
你可以试试这个,效果更好:
import sys

class Logger(object):
    def __init__(self, filename="Default.log"):
        self.terminal = sys.stdout
        self.log = open(filename, "a")

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)

sys.stdout = Logger("yourlogfilename.txt")
print("Hello world!")  # this should be saved in yourlogfilename.txt
回答by Yam Marcovic
The other answers didn't cover the case where you want forked processes to share your new stdout.
其他答案没有涵盖您希望分叉进程共享新标准输出的情况。
To do that:
要做到这一点:
from os import open, close, dup, O_WRONLY, O_CREAT

old = dup(1)                       # save a copy of the original stdout file descriptor
close(1)
open("file", O_WRONLY | O_CREAT)   # should open on fd 1, the lowest free descriptor

# ..... do stuff and then restore

close(1)
dup(old)                           # should dup back onto fd 1
close(old)                         # get rid of the leftover copy
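Because this works at the file-descriptor level, child processes started while fd 1 points at the file inherit the redirection too. A hedged illustration (Unix only; out.txt is just an example name):
由于这种做法是在文件描述符层面进行的,在 fd 1 指向文件期间启动的子进程也会继承这个重定向。下面是一个带保留的示意(仅限 Unix;out.txt 只是示例文件名):

import os
import subprocess
import sys

old = os.dup(1)
os.close(1)
os.open("out.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)  # reuses fd 1

print("from the parent process")
sys.stdout.flush()                                  # push Python's own buffer down to fd 1
subprocess.call(["echo", "from a child process"])   # the child inherits fd 1 -> out.txt

os.close(1)
os.dup(old)      # restore the original stdout onto fd 1
os.close(old)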
回答by Gerli
Quoted from PEP 343 -- The "with" Statement (added import statement):
引自PEP 343——“with”语句(添加导入语句):
Redirect stdout temporarily:
暂时重定向标准输出:
import sys
from contextlib import contextmanager

@contextmanager
def stdout_redirected(new_stdout):
    save_stdout = sys.stdout
    sys.stdout = new_stdout
    try:
        yield None
    finally:
        sys.stdout = save_stdout
Used as follows:
用法如下:
with open(filename, "w") as f:
    with stdout_redirected(f):
        print "Hello world"
This isn't thread-safe, of course, but neither is doing this same dance manually. In single-threaded programs (for example in scripts) it is a popular way of doing things.
当然,这不是线程安全的,但手动做同样的切换也同样不是线程安全的。在单线程程序中(例如脚本中),这是一种很常见的做法。
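A small, hedged variant of the same recipe (not part of the quoted PEP text): redirect to os.devnull to silence a noisy block of code.
同一做法的一个小变体(不属于所引用的 PEP 文本):重定向到 os.devnull,让一段吵闹的代码安静下来。

# Assumes the stdout_redirected() helper defined above is in scope.
import os

with open(os.devnull, "w") as devnull:
    with stdout_redirected(devnull):
        print("you will never see this")
print("back to normal output")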
回答by jpaugh
Programs written in other languages (e.g. C) have to do special magic (called double-forking) expressly to detach from the terminal (and to prevent zombie processes). So, I think the best solution is to emulate them.
用其他语言(例如 C)编写的程序必须专门施展一种特殊的魔法(称为双重 fork,即 double-forking)才能脱离终端(并防止僵尸进程)。所以,我认为最好的解决方案就是模仿它们。
A plus of re-executing your program is, you can choose redirections on the command-line, e.g. /usr/bin/python mycoolscript.py 2>&1 1>/dev/null
重新执行程序的一个好处是,您可以在命令行上选择重定向,例如 /usr/bin/python mycoolscript.py 2>&1 1>/dev/null
See this post for more info: What is the reason for performing a double fork when creating a daemon?
有关更多信息,请参阅此帖子:创建守护进程时为什么要执行双重 fork?
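For completeness, a rough, hedged sketch of what that double fork looks like in Python (Unix only; a real daemon would also handle umask, the working directory, pid files, signals, etc., which python-daemon takes care of):
为完整起见,下面是双重 fork 在 Python 中大致的样子(带保留的草图,仅限 Unix;真正的守护进程还要处理 umask、工作目录、pid 文件、信号等,python-daemon 会替你处理这些):

import os

def daemonize():
    if os.fork() > 0:      # first fork: the parent returns to the shell
        os._exit(0)
    os.setsid()            # become session leader, detach from the controlling terminal
    if os.fork() > 0:      # second fork: can never re-acquire a controlling tty
        os._exit(0)

    # point the standard file descriptors at /dev/null (or a log file)
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)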
回答by jfs
There is a contextlib.redirect_stdout() function in Python 3.4:
Python 3.4 中有 contextlib.redirect_stdout() 函数:
from contextlib import redirect_stdout

with open('help.txt', 'w') as f:
    with redirect_stdout(f):
        print('it now prints to `help.txt`')
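redirect_stdout() accepts any file-like object, so for example io.StringIO can capture the output as a string:
redirect_stdout() 可以接受任何类文件对象,例如可以用 io.StringIO 把输出捕获成字符串:

import io
from contextlib import redirect_stdout

buf = io.StringIO()
with redirect_stdout(buf):
    print('captured, not printed')
print(buf.getvalue())  # -> captured, not printed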
redirect_stdout() is similar to:
它的实现类似于:
import sys
from contextlib import contextmanager

@contextmanager
def redirect_stdout(new_target):
    old_target, sys.stdout = sys.stdout, new_target  # replace sys.stdout
    try:
        yield new_target  # run some code with the replaced stdout
    finally:
        sys.stdout = old_target  # restore to the previous value
that can be used on earlier Python versions. The latter version is not reusable, but it can be made so if desired.
它可以在更早的 Python 版本上使用。后一个版本不可重用,不过如果需要,可以把它改成可重用的。
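One hedged way to make it reusable (names here are illustrative) is a small class-based context manager, similar in spirit to what contextlib.redirect_stdout() does internally:
一种让它可重用的做法(这里的名字仅作示意)是写一个基于类的小型上下文管理器,思路与 contextlib.redirect_stdout() 的内部实现类似:

import sys

class RedirectStdout:
    def __init__(self, new_target):
        self._new_target = new_target
        self._old_targets = []            # a stack, so the same instance can be reused or nested

    def __enter__(self):
        self._old_targets.append(sys.stdout)
        sys.stdout = self._new_target
        return self._new_target

    def __exit__(self, exc_type, exc_value, traceback):
        sys.stdout = self._old_targets.pop()

redirect = RedirectStdout(open('log.txt', 'w'))
with redirect:
    print('first use')
with redirect:                            # the same instance works a second time
    print('second use')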
Neither of them redirects stdout at the file descriptor level, e.g.:
它们都不会在文件描述符层面重定向 stdout,例如:
import os
import sys
from contextlib import redirect_stdout

stdout_fd = sys.stdout.fileno()
with open('output.txt', 'w') as f, redirect_stdout(f):
    print('redirected to a file')
    os.write(stdout_fd, b'not redirected')
    os.system('echo this also is not redirected')
b'not redirected' and 'echo this also is not redirected' are not redirected to the output.txt file.
b'not redirected' 和 'echo this also is not redirected' 不会被重定向到 output.txt 文件。
To redirect at the file descriptor level, os.dup2() could be used:
要在文件描述符层面进行重定向,可以使用 os.dup2():
import os
import sys
from contextlib import contextmanager

def fileno(file_or_fd):
    fd = getattr(file_or_fd, 'fileno', lambda: file_or_fd)()
    if not isinstance(fd, int):
        raise ValueError("Expected a file (`.fileno()`) or a file descriptor")
    return fd

@contextmanager
def stdout_redirected(to=os.devnull, stdout=None):
    if stdout is None:
        stdout = sys.stdout

    stdout_fd = fileno(stdout)
    # copy stdout_fd before it is overwritten
    # NOTE: `copied` is inheritable on Windows when duplicating a standard stream
    with os.fdopen(os.dup(stdout_fd), 'wb') as copied:
        stdout.flush()  # flush library buffers that dup2 knows nothing about
        try:
            os.dup2(fileno(to), stdout_fd)  # $ exec >&to
        except ValueError:  # filename
            with open(to, 'wb') as to_file:
                os.dup2(to_file.fileno(), stdout_fd)  # $ exec > to
        try:
            yield stdout  # allow code to be run with the redirected stdout
        finally:
            # restore stdout to its previous value
            # NOTE: dup2 makes stdout_fd inheritable unconditionally
            stdout.flush()
            os.dup2(copied.fileno(), stdout_fd)  # $ exec >&copied
The same example works now if stdout_redirected() is used instead of redirect_stdout():
如果用 stdout_redirected() 代替 redirect_stdout(),同一个示例现在就可以正常工作了:
import os
import sys

stdout_fd = sys.stdout.fileno()
with open('output.txt', 'w') as f, stdout_redirected(f):
    print('redirected to a file')
    os.write(stdout_fd, b'it is redirected now\n')
    os.system('echo this is also redirected')
print('this goes back to stdout')
The output that previously was printed on stdout now goes to output.txt as long as the stdout_redirected() context manager is active.
只要 stdout_redirected() 上下文管理器处于活动状态,之前打印到 stdout 的输出现在就会写入 output.txt。
Note: stdout.flush() does not flush C stdio buffers on Python 3, where I/O is implemented directly on read()/write() system calls. To flush all open C stdio output streams, you could call libc.fflush(None) explicitly if some C extension uses stdio-based I/O:
注意:在 Python 3 上,stdout.flush() 不会刷新 C stdio 缓冲区,因为其 I/O 直接基于 read()/write() 系统调用实现。如果某些 C 扩展使用基于 stdio 的 I/O,要刷新所有打开的 C stdio 输出流,可以显式调用 libc.fflush(None):
try:
    import ctypes
    from ctypes.util import find_library
except ImportError:
    libc = None
else:
    try:
        libc = ctypes.cdll.msvcrt  # Windows
    except OSError:
        libc = ctypes.cdll.LoadLibrary(find_library('c'))

def flush(stream):
    try:
        libc.fflush(None)
        stream.flush()
    except (AttributeError, ValueError, IOError):
        pass  # unsupported
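A hedged usage sketch, assuming both the stdout_redirected() and flush() helpers defined above are in scope:
一个带保留的用法示意,假设上面定义的 stdout_redirected() 和 flush() 两个辅助函数都在作用域内:

import sys

flush(sys.stdout)                     # push pending C stdio and Python buffers first
with stdout_redirected(to='output.txt'):
    print('redirected')
    flush(sys.stdout)                 # flushed before the file descriptors are restored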
You could use the stdout parameter to redirect other streams, not only sys.stdout, e.g., to merge sys.stderr and sys.stdout:
您可以使用 stdout 参数来重定向其他流,而不仅仅是 sys.stdout,例如合并 sys.stderr 和 sys.stdout:
def merged_stderr_stdout():  # $ exec 2>&1
    return stdout_redirected(to=sys.stdout, stdout=sys.stderr)
Example:
例子:
from __future__ import print_function
import sys

with merged_stderr_stdout():
    print('this is printed on stdout')
    print('this is also printed on stdout', file=sys.stderr)
Note: stdout_redirected() mixes buffered I/O (sys.stdout usually) and unbuffered I/O (operations on file descriptors directly). Beware, there could be buffering issues.
注意:stdout_redirected() 混用了缓冲 I/O(通常是 sys.stdout)和非缓冲 I/O(直接对文件描述符的操作)。当心,可能存在缓冲问题。
To answer your edit: you could use python-daemon to daemonize your script and use the logging module (as @erikb85 suggested) instead of print statements and merely redirecting stdout for your long-running Python script that you currently run using nohup.
回答你补充的内容:对于这个目前用 nohup 运行的长时间脚本,你可以用 python-daemon 把它守护进程化,并使用 logging 模块(如 @erikb85 建议的那样)来代替 print 语句,而不是仅仅重定向 stdout。
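A minimal sketch of that logging-based approach (file name and format are illustrative):
基于 logging 的做法的一个最简示意(文件名和格式仅作示意):

import logging

logging.basicConfig(
    filename='app.log',                      # hypothetical log file path
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s',
)
log = logging.getLogger(__name__)
log.info('long-running script started')     # goes to app.log, survives the ssh logout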
回答by vaidik
Based on this answer: https://stackoverflow.com/a/5916874/1060344, here is another way I figured out which I use in one of my projects. Whatever you replace sys.stderr or sys.stdout with, you have to make sure that the replacement complies with the file interface, especially if this is something you are doing because stderr/stdout are used in some other library that is not under your control. That library may be using other methods of the file object.
基于这个答案:https://stackoverflow.com/a/5916874/1060344,这是我在自己的一个项目中使用的另一种方法。无论你用什么来替换 sys.stderr 或 sys.stdout,都必须确保替换对象符合 file 接口,尤其是当你这么做的原因是 stderr/stdout 被某个不受你控制的库使用时。那个库可能会用到 file 对象的其他方法。
Check out this way where I still let everything go to stderr/stdout (or any file for that matter) and also send the message to a log file using Python's logging facility (but you can really do anything with this):
来看看下面这种方式:我仍然让所有输出照常写到 stderr/stdout(或任何其他文件),同时用 Python 的日志记录工具把消息发送到一个日志文件(实际上你可以在这里做任何事情):
class FileToLogInterface(file):
    '''
    Interface to make sure that every time anything is written to stderr, it is
    also forwarded to a file.
    '''

    def __init__(self, *args, **kwargs):
        if 'cfg' not in kwargs:
            raise TypeError('argument cfg is required.')
        else:
            if not isinstance(kwargs['cfg'], config.Config):
                raise TypeError(
                    'argument cfg should be a valid '
                    'PostSegmentation configuration object i.e. '
                    'postsegmentation.config.Config')
        self._cfg = kwargs['cfg']
        kwargs.pop('cfg')

        self._logger = logging.getLogger('access_log')

        super(FileToLogInterface, self).__init__(*args, **kwargs)

    def write(self, msg):
        super(FileToLogInterface, self).write(msg)
        self._logger.info(msg)
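A hypothetical usage sketch (Python 2, since the class subclasses the built-in file type); my_config stands in for the project-specific config object required above, and an 'access_log' logger is assumed to be configured elsewhere:
一个假想的用法示意(Python 2,因为该类继承自内置的 file 类型);my_config 代表上面要求的项目专用配置对象,并假设别处已经配置好了名为 'access_log' 的 logger:

import sys

sys.stderr = FileToLogInterface('errors.log', 'w', cfg=my_config)
sys.stderr.write('this line goes to errors.log and to the access_log logger\n')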
回答by duncan
You need a terminal multiplexer like either tmux or GNU screen
您需要一个终端多路复用器,例如tmux或GNU screen
I'm surprised that a small comment by Ryan Amos to the original question is the only mention of a solution far preferable to all the others on offer, no matter how clever the Python trickery may be and how many upvotes they've received. Further to Ryan's comment, tmux is a nice alternative to GNU screen.
令我惊讶的是,Ryan Amos 在原问题下的一条小小评论,竟然是唯一提到这个远比其他所有方案都更可取的解决方案的地方,无论那些 Python 技巧有多聪明、获得了多少赞。补充 Ryan 的评论:tmux 是 GNU screen 的一个很好的替代品。
But the principle is the same: if you ever find yourself wanting to leave a terminal job running while you log out, head to the cafe for a sandwich, pop to the bathroom, go home (etc.) and then later reconnect to your terminal session from anywhere or any computer as though you'd never been away, terminal multiplexers are the answer. Think of them as VNC or remote desktop for terminal sessions. Anything else is a workaround. As a bonus, when the boss and/or partner comes in and you inadvertently ctrl-w / cmd-w your terminal window instead of your browser window with its dodgy content, you won't have lost the last 18 hours' worth of processing!
但原理是一样的:如果你想在注销后让终端里的任务继续运行,去咖啡馆买个三明治、上个洗手间、回家(等等),之后再从任何地方、任何电脑重新连上你的终端会话,就像从未离开过一样,那么终端多路复用器就是答案。可以把它们看作终端会话的 VNC 或远程桌面。其他任何方法都只是变通手段。还有一个额外的好处:当老板和/或伴侣走进来,你不小心对终端窗口而不是那个内容可疑的浏览器窗口按了 ctrl-w / cmd-w 时,你也不会丢掉过去 18 小时的处理成果!
回答by damio
Here is a variation of Yuda Prawira's answer:
这是 Yuda Prawira 答案的一个变体:

- implement flush() and all the file attributes
- write it as a contextmanager
- capture stderr also

- 实现 flush() 和所有文件属性
- 将其编写为上下文管理器
- 同时捕获 stderr
import contextlib, sys

@contextlib.contextmanager
def log_print(file):
    # capture all outputs to a log file while still printing it
    class Logger:
        def __init__(self, file):
            self.terminal = sys.stdout
            self.log = file

        def write(self, message):
            self.terminal.write(message)
            self.log.write(message)

        def __getattr__(self, attr):
            return getattr(self.terminal, attr)

    logger = Logger(file)
    _stdout = sys.stdout
    _stderr = sys.stderr
    sys.stdout = logger
    sys.stderr = logger
    try:
        yield logger.log
    finally:
        sys.stdout = _stdout
        sys.stderr = _stderr


with log_print(open('mylogfile.log', 'w')):
    print('hello world')
    print('hello world on stderr', file=sys.stderr)

# you can capture the output to a string with:
# with log_print(io.StringIO()) as log:
#     ....
#     print('[captured output]', log.getvalue())

