How to build and use Google TensorFlow C++ API
Disclaimer: this content comes from a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same CC BY-SA license, note the original address and author information, and attribute it to the original authors (not me): StackOverflow.
Original question: http://stackoverflow.com/questions/33620794/
Asked by theideasmith
I'm really eager to start using Google's new Tensorflow library in C++. The website and docs are just really unclear in terms of how to build the project's C++ API and I don't know where to start.
Can someone with more experience help by discovering and sharing a guide to using tensorflow's C++ API?
Accepted answer by mrry
To get started, you should download the source code from GitHub by following the instructions here (you'll need Bazel and a recent version of GCC).
The C++ API (and the backend of the system) is in tensorflow/core. Right now, only the C++ Session interface and the C API are being supported. You can use either of these to execute TensorFlow graphs that have been built using the Python API and serialized to a GraphDef protocol buffer. There is also an experimental feature for building graphs in C++, but this is currently not quite as full-featured as the Python API (e.g. no support for auto-differentiation at present). You can see an example program that builds a small graph in C++ here.
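To make this concrete, here is a minimal sketch (my own illustration, not taken from the answer) of loading such a serialized GraphDef and running it through the C++ Session interface. The file name graph.pb and the tensor names "input" and "output" are assumptions and must match whatever your Python script exported.

#include <memory>
#include <vector>
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"

int main() {
  // Load the GraphDef protocol buffer serialized from the Python API
  // (the file name "graph.pb" is an assumption).
  tensorflow::GraphDef graph_def;
  TF_CHECK_OK(tensorflow::ReadBinaryProto(tensorflow::Env::Default(),
                                          "graph.pb", &graph_def));

  // Create a session and register the graph with it.
  std::unique_ptr<tensorflow::Session> session(
      tensorflow::NewSession(tensorflow::SessionOptions()));
  TF_CHECK_OK(session->Create(graph_def));

  // Feed a dummy 1x3 float tensor and fetch the (assumed) "output" tensor.
  tensorflow::Tensor input(tensorflow::DT_FLOAT,
                           tensorflow::TensorShape({1, 3}));
  input.flat<float>().setZero();
  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(session->Run({{"input", input}}, {"output"}, {}, &outputs));

  TF_CHECK_OK(session->Close());
  return 0;
}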
The second part of the C++ API is the API for adding a new OpKernel, which is the class containing implementations of numerical kernels for CPU and GPU. There are numerous examples of how to build these in tensorflow/core/kernels, as well as a tutorial for adding a new op in C++.
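For orientation, a kernel following that tutorial's pattern looks roughly like the sketch below. The op name DoubleInput and its behaviour are hypothetical, invented purely for illustration; see the official tutorial for the real registration details.

#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/shape_inference.h"

using namespace tensorflow;

// Declare the op's interface: one float input, one float output, same shape.
REGISTER_OP("DoubleInput")
    .Input("to_double: float")
    .Output("doubled: float")
    .SetShapeFn([](shape_inference::InferenceContext* c) {
      c->set_output(0, c->input(0));
      return Status::OK();
    });

// The OpKernel: the actual CPU implementation of the op.
class DoubleInputOp : public OpKernel {
 public:
  explicit DoubleInputOp(OpKernelConstruction* ctx) : OpKernel(ctx) {}
  void Compute(OpKernelContext* ctx) override {
    const Tensor& input = ctx->input(0);
    Tensor* output = nullptr;
    OP_REQUIRES_OK(ctx, ctx->allocate_output(0, input.shape(), &output));
    auto in = input.flat<float>();
    auto out = output->flat<float>();
    for (int i = 0; i < in.size(); ++i) out(i) = 2.0f * in(i);
  }
};

// Bind the kernel implementation to the op for CPU devices.
REGISTER_KERNEL_BUILDER(Name("DoubleInput").Device(DEVICE_CPU), DoubleInputOp);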
Answered by Jim
To add to @mrry's post, I put together a tutorial that explains how to load a TensorFlow graph with the C++ API. It's very minimal and should help you understand how all of the pieces fit together. Here's the meat of it:
Requirements:
- Bazel installed
- Clone TensorFlow repo
Folder structure:
tensorflow/tensorflow/|project name|/
tensorflow/tensorflow/|project name|/|project name|.cc (e.g. https://gist.github.com/jimfleming/4202e529042c401b17b7)
tensorflow/tensorflow/|project name|/BUILD
BUILD:
cc_binary(
name = "<project name>",
srcs = ["<project name>.cc"],
deps = [
"//tensorflow/core:tensorflow",
]
)
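The answer doesn't spell out the build command; assuming the layout above, the binary would typically be built from the repository root with something like:

bazel build //tensorflow/<project name>:<project name>

and the result ends up under bazel-bin/tensorflow/<project name>/.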
Two caveats for which there are probably workarounds:
- Right now, building things needs to happen within the TensorFlow repo.
- The compiled binary is huge (103MB).
https://medium.com/@jimfleming/loading-a-tensorflow-graph-with-the-c-api-4caaff88463f
Answered by cjweeks
If you wish to avoid both building your projects with Bazel and generating a large binary, I have assembled a repository that shows how to use the TensorFlow C++ library with CMake. You can find it here. The general ideas are as follows:
- Clone the TensorFlow repository.
- Add a build rule to tensorflow/BUILD (the provided ones do not include all of the C++ functionality).
- Build the TensorFlow shared library.
- Install specific versions of Eigen and Protobuf, or add them as external dependencies.
- Configure your CMake project to use the TensorFlow library (a sketch follows below).
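For the last step, a CMakeLists.txt along the following lines can work. This is only a sketch under assumptions, not the configuration from the linked repository: the install locations (here the /usr/local/include/tf and /usr/local/lib layout used in a later answer) and the target/file names are placeholders.

cmake_minimum_required(VERSION 3.5)
project(tf_example)
set(CMAKE_CXX_STANDARD 11)

# Assumed locations of the copied headers and the shared library from step 3.
set(TENSORFLOW_INCLUDE_DIR /usr/local/include/tf)
set(TENSORFLOW_LIBRARY /usr/local/lib/libtensorflow_cc.so)

add_executable(tf_example main.cpp)
target_include_directories(tf_example PRIVATE
    ${TENSORFLOW_INCLUDE_DIR}
    /usr/local/include/eigen3)
target_link_libraries(tf_example ${TENSORFLOW_LIBRARY} protobuf)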
Answered by lababidi
First, after installing protobuf and eigen, build TensorFlow:
./configure
bazel build //tensorflow:libtensorflow_cc.so
Then copy the following include headers and dynamic shared library to /usr/local/lib and /usr/local/include:
mkdir /usr/local/include/tf
cp -r bazel-genfiles/ /usr/local/include/tf/
cp -r tensorflow /usr/local/include/tf/
cp -r third_party /usr/local/include/tf/
cp -r bazel-bin/libtensorflow_cc.so /usr/local/lib/
Lastly, compile using an example:
g++ -std=c++11 -o tf_example \
-I/usr/local/include/tf \
-I/usr/local/include/eigen3 \
-g -Wall -D_DEBUG -Wshadow -Wno-sign-compare -w \
-L/usr/local/lib \
`pkg-config --cflags --libs protobuf` -ltensorflow_cc tf_example.cpp
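The tf_example.cpp above is not shown in the answer; a minimal placeholder (an assumption on my part, just enough to verify that the headers and libtensorflow_cc.so are found) could be:

#include <iostream>
#include "tensorflow/core/public/session.h"

int main() {
  // Creating and closing a session is enough to exercise the shared library.
  tensorflow::Session* session = nullptr;
  tensorflow::Status status =
      tensorflow::NewSession(tensorflow::SessionOptions(), &session);
  if (!status.ok()) {
    std::cerr << status.ToString() << std::endl;
    return 1;
  }
  std::cout << "TensorFlow session created successfully" << std::endl;
  session->Close();
  delete session;
  return 0;
}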
Answered by Renan Wille
If you are thinking of using the TensorFlow C++ API in a standalone package, you will probably need tensorflow_cc.so (there is also a C API version, tensorflow.so). To build the C++ version you can use:
bazel build -c opt //tensorflow:libtensorflow_cc.so
Note 1: If you want to add intrinsics support, you can add these flags: --copt=-msse4.2 --copt=-mavx
Note 2: If you are thinking of using OpenCV in your project as well, there is an issue when using both libs together (tensorflow issue) and you should use --config=monolithic.
After building the library you need to add it to your project. To do that, you can include these paths:
tensorflow
tensorflow/bazel-tensorflow/external/eigen_archive
tensorflow/bazel-tensorflow/external/protobuf_archive/src
tensorflow/bazel-genfiles
And link the library to your project:
tensorflow/bazel-bin/tensorflow/libtensorflow_framework.so (unused if you build with --config=monolithic)
tensorflow/bazel-bin/tensorflow/libtensorflow_cc.so
And when you are building your project, you should also tell your compiler to use the C++11 standard.
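Putting the paths and libraries above together, an illustrative compile command (not from the original answer; main.cc and my_app are placeholder names, and the command is run from the directory containing the tensorflow checkout) could look like:

g++ -std=c++11 -o my_app main.cc \
    -I tensorflow \
    -I tensorflow/bazel-tensorflow/external/eigen_archive \
    -I tensorflow/bazel-tensorflow/external/protobuf_archive/src \
    -I tensorflow/bazel-genfiles \
    -L tensorflow/bazel-bin/tensorflow \
    -ltensorflow_cc -ltensorflow_framework

Drop -ltensorflow_framework if you built with --config=monolithic, as noted above.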
Side note: paths are relative to TensorFlow version 1.5 (you may need to check whether anything has changed in your version).
This link also helped me a lot in finding all this information: link
Answered by kecsap
If you don't want to build TensorFlow yourself and your operating system is Debian or Ubuntu, you can download prebuilt packages with the TensorFlow C/C++ libraries. This distribution can be used for C/C++ inference on the CPU; GPU support is not included:
https://github.com/kecsap/tensorflow_cpp_packaging/releases
There are also written instructions on how to freeze a checkpoint in TensorFlow (TFLearn) and load the model for inference with the C/C++ API:
https://github.com/kecsap/tensorflow_cpp_packaging/blob/master/README.md
Beware: I am the developer of this Github project.
Answered by Floop
If you don't mind using CMake, there is also the tensorflow_cc project, which builds and installs the TF C++ API for you, along with convenient CMake targets you can link against. The project README contains an example and Dockerfiles you can easily follow.
Answered by Ivan Seidel
You can use this shell script to install (most of) its dependencies, clone, build, compile and get all the necessary files into the ../src/includes folder:
https://github.com/node-tensorflow/node-tensorflow/blob/master/tools/install.sh
Answered by Martin Pecka
I use a hack/workaround to avoid having to build the whole TF library myself (which saves time (it's set up in 3 minutes), disk space, installing dev dependencies, and the size of the resulting binary). It's officially unsupported, but works well if you just want to jump in quickly.
Install TF through pip (pip install tensorflow or pip install tensorflow-gpu). Then find its library _pywrap_tensorflow.so (TF 0.* - 1.0) or _pywrap_tensorflow_internal.so (TF 1.1+). In my case (Ubuntu) it's located at /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so. Then create a symlink to this library called lib_pywrap_tensorflow.so somewhere where your build system finds it (e.g. /usr/lib/local). The prefix lib is important! You can also give it another lib*.so name - if you call it libtensorflow.so, you may get better compatibility with other programs written to work with TF.
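For example, the symlink step could look like this on the Ubuntu setup described above (the pip path is the one from the answer; placing the link in /usr/local/lib, a directory the linker searches by default, is my own assumption):

sudo ln -s /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so \
           /usr/local/lib/lib_pywrap_tensorflow.so
sudo ldconfig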
Then create a C++ project as you are used to (CMake, Make, Bazel, whatever you like).
And then you're ready to just link against this library to have TF available for your projects (and you also have to link against python2.7 libraries)! In CMake, you e.g. just add target_link_libraries(target _pywrap_tensorflow python2.7).
The C++ header files are located around this library, e.g. in /usr/local/lib/python2.7/dist-packages/tensorflow/include/.
Once again: this way is officially unsupported and you may run into various issues. The library seems to be statically linked against e.g. protobuf, so you may run into odd link-time or run-time issues. But I am able to load a stored graph, restore the weights and run inference, which is IMO the most wanted functionality in C++.
Answered by Rock Zhuang
TensorFlow itself only provides very basic examples of the C++ APIs.
Here is a good resource that includes examples of datasets, RNN, LSTM, CNN and more: tensorflow c++ examples