Python: Change the default GPU in TensorFlow
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license, link to the original, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/36668467/
Change default GPU in TensorFlow
Asked by MBZ
Based on the documentation, the default GPU is the one with the lowest id:
If you have more than one GPU in your system, the GPU with the lowest ID will be selected by default.
Is it possible to change this default from the command line or with one line of code?
Answered by mrry
Suever's answer correctly shows how to pin your operations to a particular GPU. However, if you are running multiple TensorFlow programs on the same machine, it is recommended that you set the CUDA_VISIBLE_DEVICES environment variable to expose different GPUs before starting the processes. Otherwise, TensorFlow will attempt to allocate almost the entire memory on all of the available GPUs, which prevents other processes from using those GPUs (even if the current process isn't using them).
Note that if you use CUDA_VISIBLE_DEVICES, the device names "/gpu:0", "/gpu:1", etc. refer to the 0th and 1st visible devices in the current process.
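For example, here is a minimal sketch (not part of the original answer) of setting the variable from inside a Python script; the value must be set before TensorFlow initializes the GPU devices, and setting it before the import is the safe choice:

import os

# Expose only the second physical GPU to this process; inside the process
# it will then be addressed as "/gpu:0".
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import tensorflow as tf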
Answered by Franck Dernoncourt
Just to be clear regarding the use of the environment variable CUDA_VISIBLE_DEVICES:
To run a script my_script.py on GPU 1 only, you can use the following command in the Linux terminal:
username@server:/scratch/coding/src$ CUDA_VISIBLE_DEVICES=1 python my_script.py
More examples illustrating the syntax:
Environment Variable Syntax Results
CUDA_VISIBLE_DEVICES=1 Only device 1 will be seen
CUDA_VISIBLE_DEVICES=0,1 Devices 0 and 1 will be visible
CUDA_VISIBLE_DEVICES="0,1" Same as above, quotation marks are optional
CUDA_VISIBLE_DEVICES=0,2,3 Devices 0, 2, 3 will be visible; device 1 is masked
CUDA_VISIBLE_DEVICES="" No GPU will be visible
Answered by Suever
As is stated in the documentation, you can use tf.device('/gpu:id') to specify a device other than the default.
# This will use the second GPU on your system
with tf.device('/gpu:1'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))
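If a given machine does not actually have a second GPU, pinning ops to '/gpu:1' as above will fail at placement time. A common workaround (a sketch reusing c from the snippet above and the standard allow_soft_placement option) is to let TensorFlow fall back to an available device:

# Fall back to an available device when '/gpu:1' does not exist,
# while still logging where each op was placed.
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
sess = tf.Session(config=config)
print(sess.run(c))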
Answered by Mohammad
If you want to run your code on the second GPU (this assumes your machine has two GPUs), you can use the following trick.
- Open a terminal
- Open tmux by typing tmux (you can install it with sudo apt-get install tmux)
- Run this line of code in tmux: CUDA_VISIBLE_DEVICES=1 python YourScript.py
Note: by default, TensorFlow uses the first GPU, so with the trick above you can run another piece of code on the second GPU separately.
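A related sketch (not from the original answer; the script names are placeholders) launches two scripts from Python, each pinned to its own GPU through a per-process CUDA_VISIBLE_DEVICES value:

import os
import subprocess

# Each child process only sees the GPU exposed to it and addresses it as "/gpu:0".
env_gpu0 = dict(os.environ, CUDA_VISIBLE_DEVICES="0")
env_gpu1 = dict(os.environ, CUDA_VISIBLE_DEVICES="1")

p0 = subprocess.Popen(["python", "script_a.py"], env=env_gpu0)
p1 = subprocess.Popen(["python", "script_b.py"], env=env_gpu1)

p0.wait()
p1.wait()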
Hope this helps!