Disable Tensorflow debugging information

Note: this page is an English translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow.

Original question: http://stackoverflow.com/questions/35911252/
Asked by Ghilas BELHADJ
By debugging information I mean what TensorFlow shows in my terminal about loaded libraries and found devices etc. not Python errors.
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:900] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: Graphics Device
major: 5 minor: 2 memoryClockRate (GHz) 1.0885
pciBusID 0000:04:00.0
Total memory: 12.00GiB
Free memory: 11.83GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:717] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Graphics Device, pci bus id: 0000:04:00.0)
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 1.0KiB
...
Answered by mwweb
You can disable all debugging logs using os.environ:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
Tested on tf 0.12 and 1.0
In detail:
0 = all messages are logged (default behavior)
1 = INFO messages are not printed
2 = INFO and WARNING messages are not printed
3 = INFO, WARNING, and ERROR messages are not printed
Answered by craymichael
2.0 Update (10/8/19): Setting TF_CPP_MIN_LOG_LEVEL should still work (see the v0.12+ update below), but there is currently an open issue (see issue #31870). If setting TF_CPP_MIN_LOG_LEVEL does not work for you (again, see below), try the following to set the log level:
import tensorflow as tf
tf.get_logger().setLevel('INFO')
In addition, please see the documentation on tf.autograph.set_verbosity, which sets the verbosity of autograph log messages - for example:
# Can also be set using the AUTOGRAPH_VERBOSITY environment variable
tf.autograph.set_verbosity(1)
v0.12+ Update (5/20/17), Working through TF 2.0+:
In TensorFlow 0.12+, per this issue, you can now control logging via the environment variable TF_CPP_MIN_LOG_LEVEL; it defaults to 0 (all logs shown) but can be set to one of the following values under the Level column.
Level | Level for Humans | Level Description
-------|------------------|------------------------------------
0 | DEBUG | [Default] Print all messages
1 | INFO | Filter out INFO messages
2 | WARNING | Filter out INFO & WARNING messages
3 | ERROR | Filter out all messages
See the following generic OS example using Python:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # or any {'0', '1', '2'}
import tensorflow as tf
To be thorough, you can also set the level for the Python tf_logging module, which is used in e.g. summary ops, tensorboard, various estimators, etc.
# append to lines above
tf.logging.set_verbosity(tf.logging.ERROR) # or any {DEBUG, INFO, WARN, ERROR, FATAL}
For 1.14 you will receive warnings if you do not change to use the v1 API as follows:
# append to lines above
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR) # or any {DEBUG, INFO, WARN, ERROR, FATAL}
For Prior Versions of TensorFlow or TF-Learn Logging (v0.11.x or lower):
View the page below for information on TensorFlow logging; with the new update, you're able to set the logging verbosity to DEBUG, INFO, WARN, ERROR, or FATAL. For example:
tf.logging.set_verbosity(tf.logging.ERROR)
The page additionally goes over monitors which can be used with TF-Learn models. Here is the page.
This doesn't block all logging, though (only TF-Learn). I have two solutions; one is a 'technically correct' solution (Linux) and the other involves rebuilding TensorFlow.
script -c 'python [FILENAME].py' | grep -v 'I tensorflow/'
For the other, please see this answer, which involves modifying source and rebuilding TensorFlow.
Answered by Pedro Lopes
I have had this problem as well (on tensorflow-0.10.0rc0), but could not fix the excessive nose tests logging problem via the suggested answers.
I managed to solve this by probing directly into the tensorflow logger. Not the most correct of fixes, but works great and only pollutes the test files which directly or indirectly import tensorflow:
# Place this before directly or indirectly importing tensorflow
import logging
logging.getLogger("tensorflow").setLevel(logging.WARNING)
Answered by serv-inc
For compatibility with Tensorflow 2.0, you can use tf.get_logger:
import logging
import tensorflow as tf
tf.get_logger().setLevel(logging.ERROR)
Answered by Wikunia
As TF_CPP_MIN_LOG_LEVEL didn't work for me, you can try:
tf.logging.set_verbosity(tf.logging.WARN)
Worked for me in tensorflow v1.6.0
Answered by estevo
The usual python3 log manager works for me with tensorflow==1.11.0:
import logging
logging.getLogger('tensorflow').setLevel(logging.INFO)
Answered by Mandy007
I solved it with this post, Cannot remove all warnings #27045, and the solution was:
import logging
logging.getLogger('tensorflow').disabled = True
Answered by dturvene
Yeah, I'm using tf 2.0-beta and want to enable/disable the default logging. The environment variable and methods in tf1.X don't seem to exist anymore.
I stepped around in PDB and found this to work:
# close the TF2 logger
tf2logger = tf.get_logger()
tf2logger.error('Close TF2 logger handlers')
tf2logger.root.removeHandler(tf2logger.root.handlers[0])
I then add my own logger API (in this case file-based):
logtf = logging.getLogger('DST')
logtf.setLevel(logging.DEBUG)
# file handler
logfile='/tmp/tf_s.log'
fh = logging.FileHandler(logfile)
fh.setFormatter( logging.Formatter('fh %(asctime)s %(name)s %(filename)s:%(lineno)d :%(message)s') )
logtf.addHandler(fh)
logtf.info('writing to %s', logfile)
Answered by Onur Demiray
For tensorflow 2.1.0, the following code works fine.
import tensorflow as tf
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
Answered by Tyler
To add some flexibility here, you can achieve more fine-grained control over the level of logging by writing a function that filters out messages however you like:
logging.getLogger('tensorflow').addFilter(my_filter_func)
where my_filter_func accepts a LogRecord object as input [LogRecord docs] and returns zero if you want the message thrown out; nonzero otherwise.
Here's an example filter that only keeps every nth info message (Python 3, due to the use of nonlocal here):
def keep_every_nth_info(n):
i = -1
def filter_record(record):
nonlocal i
i += 1
return int(record.levelname != 'INFO' or i % n == 0)
return filter_record
# Example usage for TensorFlow:
logging.getLogger('tensorflow').addFilter(keep_every_nth_info(5))
All of the above has assumed that TensorFlow has set up its logging state already. You can ensure this without side effects by calling tf.logging.get_verbosity() before adding a filter.