
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/40690598/

Date: 2020-08-19 | Source: igfitidea

Can Keras with Tensorflow backend be forced to use CPU or GPU at will?

python machine-learning tensorflow keras

Asked by mikal94305

I have Keras installed with the TensorFlow backend and CUDA. I'd like to force Keras to use the CPU on demand. Can this be done without, say, installing a separate CPU-only TensorFlow in a virtual environment? If so, how? If the backend were Theano, the flags could be set, but I have not heard of TensorFlow flags accessible via Keras.


Answered by Martin Thoma

If you want to force Keras to use CPU


Way 1

方式一

import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"   # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""

This must run before Keras / TensorFlow is imported.
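As a sketch, Way 1 can be wrapped in a small helper that fails loudly if TensorFlow was already imported (the variable is only read when CUDA initializes, so setting it later has no effect). The name force_cpu is my own, not part of Keras or TensorFlow:

```python
import os
import sys

def force_cpu():
    """Hide all CUDA devices so TensorFlow falls back to the CPU.

    Must be called before tensorflow/keras is imported, because
    CUDA_VISIBLE_DEVICES is read once at CUDA initialization.
    """
    if "tensorflow" in sys.modules:
        raise RuntimeError("tensorflow is already imported; "
                           "set CUDA_VISIBLE_DEVICES earlier")
    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"   # match nvidia-smi ordering
    os.environ["CUDA_VISIBLE_DEVICES"] = ""          # empty string = no GPUs
```

Call force_cpu() at the very top of your script, before any import keras or import tensorflow line.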

Way 2


Run your script as


$ CUDA_VISIBLE_DEVICES="" ./your_keras_code.py
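Since CUDA_VISIBLE_DEVICES is an ordinary environment variable inherited by the child process, you can check that it reaches Python before committing to a long training run; a small sketch (python3 on the PATH is assumed):

```shell
# An empty value hides every GPU, forcing CPU execution.
CUDA_VISIBLE_DEVICES="" python3 -c 'import os; print(repr(os.environ["CUDA_VISIBLE_DEVICES"]))'

# A comma-separated list restricts the process to specific GPUs.
CUDA_VISIBLE_DEVICES=0,1 python3 -c 'import os; print(os.environ["CUDA_VISIBLE_DEVICES"])'
```

The same prefix works in front of any script, including ./your_keras_code.py above.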

See also


  1. https://github.com/keras-team/keras/issues/152
  2. https://github.com/fchollet/keras/issues/4613

Answered by RACKGNOME

A rather separable way of doing this is to use


import tensorflow as tf
from keras import backend as K

# Set exactly one of these to True
GPU = False
CPU = True

num_cores = 4

if GPU:
    num_GPU = 1
    num_CPU = 1
if CPU:
    num_CPU = 1
    num_GPU = 0

config = tf.ConfigProto(intra_op_parallelism_threads=num_cores,
                        inter_op_parallelism_threads=num_cores,
                        allow_soft_placement=True,
                        device_count={'CPU': num_CPU,
                                      'GPU': num_GPU})

session = tf.Session(config=config)
K.set_session(session)

Here, with the booleans GPU and CPU, we indicate whether we would like to run our code on the GPU or the CPU by rigidly defining the number of GPUs and CPUs the TensorFlow session is allowed to access. The variables num_GPU and num_CPU define this value. num_cores then sets the number of CPU cores available for use via intra_op_parallelism_threads and inter_op_parallelism_threads.

The intra_op_parallelism_threads variable dictates the number of threads a parallel operation in a single node of the computation graph is allowed to use (intra), while the inter_op_parallelism_threads variable defines the number of threads accessible for parallel operations across the nodes of the computation graph (inter).

allow_soft_placement allows operations to be run on the CPU if any of the following criteria are met:

  1. there is no GPU implementation for the operation

  2. there are no GPU devices known or registered

  3. there is a need to co-locate with other inputs from the CPU


All of this is executed in the constructor of my class before any other operations, and is completely separable from any model or other code I use.


Note: this requires tensorflow-gpu and cuda/cudnn to be installed, because the option to use a GPU is given.


Answered by Neuraleptic

This worked for me (Win10); place it before you import keras:


import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'  # -1 matches no device, so all GPUs are hidden

Answered by harshlal028

Just import tensorflow and use keras; it's that easy.


import tensorflow as tf

# model, X, y and callbacks_list are assumed to be defined already
with tf.device('/gpu:0'):
    model.fit(X, y, epochs=20, batch_size=128, callbacks=callbacks_list)

Answered by sygi

As per the Keras tutorial, you can simply use the same tf.device scope as in regular TensorFlow:


with tf.device('/gpu:0'):
    x = tf.placeholder(tf.float32, shape=(None, 20, 64))
    y = LSTM(32)(x)  # all ops in the LSTM layer will live on GPU:0

with tf.device('/cpu:0'):
    x = tf.placeholder(tf.float32, shape=(None, 20, 64))
    y = LSTM(32)(x)  # all ops in the LSTM layer will live on CPU:0

Answered by DDz

I just spent some time figuring this out, and Thoma's answer is not complete. Say your program is test.py, and you want to use gpu0 to run it while keeping the other GPUs free.


You should write CUDA_VISIBLE_DEVICES=0 python test.py


Notice it's DEVICES, not DEVICE.

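The comma-separated value can also be built programmatically. Here is a tiny hypothetical helper (select_gpus is my own name, not a Keras or TensorFlow API) that turns a list of GPU indices into the CUDA_VISIBLE_DEVICES value; an empty list forces CPU execution:

```python
import os

def select_gpus(indices):
    """Expose only the given GPU indices to TensorFlow.

    Must run before tensorflow is imported. An empty list hides
    every GPU, which forces CPU execution.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(str(i) for i in indices)

select_gpus([0])     # equivalent to: CUDA_VISIBLE_DEVICES=0 python test.py
select_gpus([1, 2])  # only GPUs 1 and 2 are visible
select_gpus([])      # no GPUs visible -> CPU only
```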

Answered by learner

For people working in PyCharm who want to force CPU, you can add the following line in the Run/Debug configuration, under Environment variables:


<OTHER_ENVIRONMENT_VARIABLES>;CUDA_VISIBLE_DEVICES=-1