Python: how do I check if keras is using the GPU version of tensorflow?

Note: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must follow the same license and attribute it to the original authors (not this site). Original question: http://stackoverflow.com/questions/44544766/

How do I check if keras is using the GPU version of tensorflow?

python, tensorflow, neural-network, keras

Asked by humble

When I run a keras script, I get the following output:

Using TensorFlow backend.
2017-06-14 17:40:44.621761: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use SSE4.1 instructions, but these are 
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621783: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use SSE4.2 instructions, but these are 
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621788: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use AVX instructions, but these are 
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621791: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use AVX2 instructions, but these are 
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621795: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use FMA instructions, but these are 
available 
on your machine and could speed up CPU computations.
2017-06-14 17:40:44.721911: I 
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:901] successful 
NUMA node read from SysFS had negative value (-1), but there must be 
at least one NUMA node, so returning NUMA node zero
2017-06-14 17:40:44.722288: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0 
with properties: 
name: GeForce GTX 850M
major: 5 minor: 0 memoryClockRate (GHz) 0.9015
pciBusID 0000:0a:00.0
Total memory: 3.95GiB
Free memory: 3.69GiB
2017-06-14 17:40:44.722302: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0 
2017-06-14 17:40:44.722307: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0:   Y 
2017-06-14 17:40:44.722312: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating 
TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 850M, 
pci bus id: 0000:0a:00.0)

What does this mean? Am I using the GPU or the CPU version of tensorflow?

Before installing keras, I was working with the GPU version of tensorflow.

Also, sudo pip3 list shows tensorflow-gpu (1.1.0) and nothing like tensorflow-cpu.

Running the command mentioned in [this stackoverflow question] gives the following:

The TensorFlow library wasn't compiled to use SSE4.1 instructions, 
but these are available on your machine and could speed up CPU 
computations.
2017-06-14 17:53:31.424793: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use SSE4.2 instructions, but these are 
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.424803: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use AVX instructions, but these are 
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.424812: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use AVX2 instructions, but these are 
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.424820: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use FMA instructions, but these are 
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.540959: I 
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:901] successful 
NUMA node read from SysFS had negative value (-1), but there must be 
at least one NUMA node, so returning NUMA node zero
2017-06-14 17:53:31.541359: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0 
with properties: 
name: GeForce GTX 850M
major: 5 minor: 0 memoryClockRate (GHz) 0.9015
pciBusID 0000:0a:00.0
Total memory: 3.95GiB
Free memory: 128.12MiB
2017-06-14 17:53:31.541407: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0 
2017-06-14 17:53:31.541420: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0:   Y 
2017-06-14 17:53:31.541441: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating 
TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 850M, 
pci bus id: 0000:0a:00.0)
2017-06-14 17:53:31.547902: E 
tensorflow/stream_executor/cuda/cuda_driver.cc:893] failed to 
allocate 128.12M (134348800 bytes) from device: 
CUDA_ERROR_OUT_OF_MEMORY
Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce 
GTX 850M, pci bus id: 0000:0a:00.0
2017-06-14 17:53:31.549482: I 
tensorflow/core/common_runtime/direct_session.cc:257] Device 
mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce 
GTX 850M, pci bus id: 0000:0a:00.0

Answer by Wilmar van Ommeren

You are using the GPU version. You can list the available tensorflow devices with the following (also check this question):

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices()) # list of DeviceAttributes
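
If you only need a yes/no answer, a minimal sketch along the same lines (my addition, not part of the original answer) filters that list by device type:

from tensorflow.python.client import device_lib

# True if TensorFlow reports at least one local GPU device
print(any(d.device_type == 'GPU' for d in device_lib.list_local_devices()))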

EDIT:

With tensorflow >= 1.4 you can run the following function:

import tensorflow as tf
tf.test.is_gpu_available() # True/False

# Or only check for gpu's with cuda support
tf.test.is_gpu_available(cuda_only=True) 
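
A related helper (not mentioned in the original answer, but available in the same TensorFlow 1.x releases as far as I know) is tf.test.gpu_device_name(), which returns the name of the first GPU or an empty string if none is visible:

import tensorflow as tf

name = tf.test.gpu_device_name()  # e.g. '/device:GPU:0', or '' if no GPU is found
print(name if name else 'No GPU found')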

EDIT 2:

The above function is deprecated in tensorflow > 2.1. Instead you should use the following function:

import tensorflow as tf
tf.config.list_physical_devices('GPU')
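
For example, a quick count of the visible GPUs (my sketch, assuming TF 2.1 or newer):

import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
print('Num GPUs available:', len(gpus))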


NOTE:

In your case both the CPU and the GPU are available; if you were using the CPU version of tensorflow, the GPU would not be listed. Without explicitly setting a tensorflow device (with tf.device("..")), tensorflow will automatically pick your GPU (see the placement sketch below).

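As a rough illustration (my sketch, not from the original answer), explicit placement with a tf.device context looks like this; the exact device string ('/GPU:0' vs '/gpu:0') depends on your TensorFlow version:

import tensorflow as tf

# Hypothetical example: pin a matmul to the first GPU (use '/CPU:0' to compare)
with tf.device('/GPU:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
    c = tf.matmul(a, b)
print(c)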

In addition, your sudo pip3 list output clearly shows you are using tensorflow-gpu. If you had the tensorflow CPU version, the name would be something like tensorflow (1.1.0).

Check this issue for information about the warnings.

Answer by Paul Williams

A lot of things have to go right in order for Keras to use the GPU. Put this near the top of your jupyter notebook:

# confirm TensorFlow sees the GPU
from tensorflow.python.client import device_lib
assert 'GPU' in str(device_lib.list_local_devices())

# confirm Keras sees the GPU (for TensorFlow 1.X + Keras)
from keras import backend
assert len(backend.tensorflow_backend._get_available_gpus()) > 0

# confirm PyTorch sees the GPU
from torch import cuda
assert cuda.is_available()
assert cuda.device_count() > 0
print(cuda.get_device_name(cuda.current_device()))


NOTE: With the release of TensorFlow 2.0, Keras is now included as part of the TF API.

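For TF 2.x, where Keras lives under tf.keras, a roughly equivalent check (my sketch, not part of the original answer) would be:

# confirm tf.keras (TensorFlow 2.x) sees the GPU
import tensorflow as tf
assert len(tf.config.list_physical_devices('GPU')) > 0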

Answer by Ashok Kumar Jayaraman

To find out which devices your operations and tensors are assigned to, create the session with the log_device_placement configuration option set to True (TensorFlow 1.x):

import tensorflow as tf

# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True (TensorFlow 1.x Session API).
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op; the device mapping is logged when the session is created.
print(sess.run(c))

You should see the following output:

Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla K40c, pci bus
id: 0000:05:00.0
b: /job:localhost/replica:0/task:0/device:GPU:0
a: /job:localhost/replica:0/task:0/device:GPU:0
MatMul: /job:localhost/replica:0/task:0/device:GPU:0
[[ 22.  28.]
 [ 49.  64.]]
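
For TF 2.x, where sessions are gone, the equivalent switch is tf.debugging.set_log_device_placement (my sketch, not from the original answer):

import tensorflow as tf

tf.debugging.set_log_device_placement(True)  # log the device each op runs on

a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3])
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2])
print(tf.matmul(a, b))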

For more details, please refer to the link Using GPU with tensorflow.
