Python: Can I run a Keras model on a GPU?

Disclaimer: this page is an English-Chinese translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/45662253/

Date: 2020-08-19 17:12:29 | Source: igfitidea

Can I run Keras model on gpu?

Tags: python, tensorflow, keras, jupyter

Asked by Ryan

I'm running a Keras model with a submission deadline of 36 hours. If I train the model on the CPU, it will take approximately 50 hours. Is there a way to run Keras on a GPU?

I'm using the TensorFlow backend and running it in a Jupyter notebook, without Anaconda installed.

Answered by Vikash Singh

Yes, you can run Keras models on a GPU, but there are a few things you will have to check first:

  1. Your system has a GPU (Nvidia only, as AMD does not work yet).
  2. You have installed the GPU version of TensorFlow.
  3. You have installed CUDA (see the installation instructions).
  4. Verify that TensorFlow is running on the GPU (check that the GPU is working):

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

OR


from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

The output will look something like this:

[
  name: "/cpu:0" device_type: "CPU",
  name: "/gpu:0" device_type: "GPU"
]

Once all this is done, your model will run on the GPU.

To check whether Keras (>= 2.1.1) is using the GPU:

from keras import backend as K
K.tensorflow_backend._get_available_gpus()
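As a related sketch beyond the answer itself (the environment variable is standard CUDA/TensorFlow behavior, not a Keras API), you can also control which GPUs Keras/TensorFlow can see by setting CUDA_VISIBLE_DEVICES before TensorFlow is imported:

```python
import os

# Expose only GPU 0 to TensorFlow/Keras; setting "-1" instead would
# hide all GPUs and force CPU execution. This must happen before
# `import tensorflow` (or `import keras`) runs.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```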

All the best.


Answered by johncasey

Sure. I assume that you have already installed TensorFlow for GPU.

You need to add the following block after importing Keras. I am working on a machine that has a 56-core CPU and a GPU.

import keras
import tensorflow as tf


config = tf.ConfigProto( device_count = {'GPU': 1 , 'CPU': 56} ) 
sess = tf.Session(config=config) 
keras.backend.set_session(sess)

Of course, this usage enforces my machine's maximum limits. You can decrease the CPU and GPU consumption values.
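As an aside beyond this answer: tf.ConfigProto and tf.Session were removed in TensorFlow 2.x. A rough TF 2.x sketch of the same idea (assuming a TF 2.x install; the thread count 56 mirrors the machine above) might look like this:

```python
import tensorflow as tf

# Rough TF 2.x analogue of ConfigProto's device_count: cap the CPU
# thread pool, then restrict GPU usage. Must run before any op executes.
tf.config.threading.set_intra_op_parallelism_threads(56)

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Use only the first GPU and let its memory grow on demand
    # instead of grabbing all of it up front.
    tf.config.set_visible_devices(gpus[0], 'GPU')
    tf.config.experimental.set_memory_growth(gpus[0], True)
```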

Answered by Tensorflow Support

2.0 Compatible Answer: While the answers above explain in detail how to use a GPU with a Keras model, I want to explain how it can be done for TensorFlow 2.0.

To find out how many GPUs are available, we can use the code below:

import tensorflow as tf

print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))

To find out which devices your operations and tensors are assigned to, put tf.debugging.set_log_device_placement(True) as the first statement of your program.

Enabling device placement logging causes any tensor allocations or operations to be printed. For example, running the code below:

import tensorflow as tf

tf.debugging.set_log_device_placement(True)

# Create some tensors
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)

print(c)

gives the output shown below:

Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0
tf.Tensor(
[[22. 28.]
 [49. 64.]], shape=(2, 2), dtype=float32)
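To complement the logging example, here is a small sketch (my addition, not part of the original answer) that pins the same computation to an explicit device; '/CPU:0' is used so it runs on any machine, but '/GPU:0' works the same way when a GPU is present:

```python
import tensorflow as tf

# Pin the matmul to an explicit device context.
with tf.device('/CPU:0'):
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    c = tf.matmul(a, b)

print(c.numpy())  # [[22. 28.] [49. 64.]]
```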

For more information, refer to this link.

Answered by Kevin Jarvis

Of course. If you are running on the TensorFlow or CNTK backend, your code will run on your GPU devices by default. But with the Theano backend, you can use the following

Theano flags:

THEANO_FLAGS=device=gpu,floatX=float32 python my_keras_script.py
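As an alternative sketch (using Theano's documented THEANO_FLAGS mechanism), the same flags can be set from inside the script instead of on the command line, as long as it happens before Keras/Theano is imported:

```python
import os

# Equivalent of the shell command above: set the Theano flags in the
# process environment. Must run before keras/theano are imported.
os.environ["THEANO_FLAGS"] = "device=gpu,floatX=float32"
```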

Answered by Tae-Sung Shin

Check whether your script is running on the GPU in Task Manager. If not, check that your CUDA version is the right one for the TensorFlow version you are using, as the other answers have already suggested.

Additionally, a cuDNN library matching your CUDA version is required to run TensorFlow on the GPU. Download and extract it from here, and put the DLL (e.g., cudnn64_7.dll) into the CUDA bin folder (e.g., C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin).
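After installing cuDNN, a quick sanity check (a sketch, assuming a TensorFlow 2.x install) is to ask TensorFlow whether it was built with CUDA and which GPUs it can actually see:

```python
import tensorflow as tf

# True only for a CUDA-enabled TensorFlow build.
print("Built with CUDA:", tf.test.is_built_with_cuda())

# An empty list means TensorFlow cannot see any GPU (driver/CUDA/cuDNN
# mismatch, or no GPU present).
print("GPUs visible:", tf.config.list_physical_devices('GPU'))
```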