Will scikit-learn utilize GPU?
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same CC BY-SA license and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/41567895/
Will scikit-learn utilize GPU?
Asked by blue-sky
Reading the implementation of k-means in TensorFlow (http://learningtensorflow.com/lesson6/) and in scikit-learn (http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html), I'm struggling to decide which implementation to use.
scikit-learn is installed as part of the tensorflow Docker container, so either implementation can be used.
Reason to use scikit-learn:
scikit-learn contains less boilerplate than the tensorflow implementation.
Reason to use tensorflow:
If running on an Nvidia GPU, the algorithm will be run in parallel. I'm not sure whether scikit-learn will utilize all available GPUs?
Reading https://www.quora.com/What-are-the-main-differences-between-TensorFlow-and-SciKit-Learn
TensorFlow is more low-level; basically, the Lego bricks that help you to implement machine learning algorithms whereas scikit-learn offers you off-the-shelf algorithms, e.g., algorithms for classification such as SVMs, Random Forests, Logistic Regression, and many, many more. TensorFlow really shines if you want to implement deep learning algorithms, since it allows you to take advantage of GPUs for more efficient training.
This statement reinforces my assertion that "scikit-learn contains less boilerplate than the tensorflow implementation", but it also suggests that scikit-learn will not utilize all available GPUs?
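To make the boilerplate comparison concrete, here is what the complete scikit-learn clustering step looks like; the synthetic `make_blobs` data is a stand-in, not part of the original question, and the TensorFlow version from the linked lesson would additionally need placeholders, a session, and an explicit training loop.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic 2-D data stands in for whatever the real pipeline would feed in.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# The entire clustering step: no sessions, placeholders, or training loops.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(kmeans.cluster_centers_.shape)  # (3, 2)
```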
Answered by Ivan De Paz Centeno
Tensorflow only uses the GPU if it is built against CUDA and cuDNN. By default, neither of the two will use the GPU, especially if it is running inside Docker, unless you use nvidia-docker and an image capable of doing so.
Scikit-learn is not intended to be used as a deep-learning framework, and it does not appear to support GPU computations.
Why is there no support for deep or reinforcement learning / Will there be support for deep or reinforcement learning in scikit-learn?
Deep learning and reinforcement learning both require a rich vocabulary to define an architecture, with deep learning additionally requiring GPUs for efficient computing. However, neither of these fit within the design constraints of scikit-learn; as a result, deep learning and reinforcement learning are currently out of scope for what scikit-learn seeks to achieve.
Will you add GPU support in scikit-learn?
No, or at least not in the near future. The main reason is that GPU support will introduce many software dependencies and introduce platform specific issues. scikit-learn is designed to be easy to install on a wide variety of platforms. Outside of neural networks, GPUs don't play a large role in machine learning today, and much larger gains in speed can often be achieved by a careful choice of algorithms.
Extracted from http://scikit-learn.org/stable/faq.html#will-you-add-gpu-support
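The FAQ's closing point, that a careful choice of algorithms often beats hardware acceleration, can be illustrated with scikit-learn's own MiniBatchKMeans, a CPU-only variant that trades a little cluster quality for much faster fitting. A minimal sketch on synthetic data (the dataset and sizes are illustrative, not from the answer):

```python
import time

from sklearn.cluster import KMeans, MiniBatchKMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=100_000, centers=8, random_state=0)

start = time.perf_counter()
full = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
full_time = time.perf_counter() - start

start = time.perf_counter()
mini = MiniBatchKMeans(n_clusters=8, n_init=10, batch_size=1024,
                       random_state=0).fit(X)
mini_time = time.perf_counter() - start

# MiniBatchKMeans typically fits several times faster on data this size,
# with only a modest increase in inertia (within-cluster sum of squares).
print(f"KMeans: {full_time:.2f}s, MiniBatchKMeans: {mini_time:.2f}s")
```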
Answered by Guillaume Chevalier
Yes, if you use TensorFlow or PyTorch within Scikit-Learn pipelines by using Neuraxle.
Neuraxle is an extension of Scikit-Learn to make it more compatible with all deep learning libraries.
Here is a full project example from A to Z where TensorFlow is used with Neuraxle as if it were used with Scikit-Learn.
Here is another practical example where TensorFlow is used within a scikit-learn-like pipeline.
The trick is performed by using Neuraxle-TensorFlow or Neuraxle-PyTorch.
Why so?
Using one of Neuraxle-TensorFlow or Neuraxle-PyTorch will provide you with savers that allow your model to be serialized correctly. You want it to be serialized correctly so as to ensure compatibility between scikit-learn and your deep learning framework when it comes time to save or parallelize things, and so forth. You can read how Neuraxle solves this with savers here.
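To see why serialization matters: a pure scikit-learn pipeline survives a plain pickle round trip, but a step holding live framework state (a raw TensorFlow session or graph, for instance) generally would not, and that gap is what Neuraxle's savers address. A minimal sketch of the working case, with illustrative pipeline contents not taken from the answer:

```python
import pickle

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)

# A pure scikit-learn pipeline: every step is plain-Python picklable.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("cluster", KMeans(n_clusters=3, n_init=10, random_state=0)),
]).fit(X)

# Round-trip through pickle, as scikit-learn model persistence does.
restored = pickle.loads(pickle.dumps(pipeline))

# The restored pipeline predicts identically; a step wrapping raw deep
# learning state would instead fail at pickle.dumps without a saver.
assert (restored.predict(X) == pipeline.predict(X)).all()
```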