Python TensorFlow: Different Ways to Export and Run a Graph in C++

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must likewise follow the CC BY-SA license and attribute it to the original authors (not the translator). Original question: http://stackoverflow.com/questions/35508866/

Tags: python, c++, tensorflow

Asked by Hamed MP

To import your trained network into C++, you first need to export it in a form C++ can load. After a lot of searching that turned up almost no information, it became clear that we should use freeze_graph() to do this.

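freeze_graph() ships both as a standalone script and as a Python function; here is a minimal sketch of the Python route. All paths and the 'output' node name below are placeholders for your own model, and the exact signature may differ between TF versions:

from tensorflow.python.tools import freeze_graph

# all paths and the output node name here are placeholders
freeze_graph.freeze_graph(input_graph='graph/my_graph.pbtxt',   # GraphDef from write_graph()
                          input_saver='',
                          input_binary=False,                   # my_graph.pbtxt is text-format
                          input_checkpoint='models/my-model',   # checkpoint prefix holding the weights
                          output_node_names='output',           # comma-separated ops to keep
                          restore_op_name='save/restore_all',
                          filename_tensor_name='save/Const:0',
                          output_graph='graph/frozen_graph.pb', # self-contained result
                          clear_devices=True,
                          initializer_nodes='')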

Thanks to the new 0.7 version of TensorFlow, they added documentation for it.

After looking into the documentation, I found there are a few similar methods. Can you tell me the difference between freeze_graph() and tf.train.export_meta_graph? The latter has similar parameters, but it seems it can also be used for importing models into C++ (I just guess the difference is that to use the file output by this method you can only use import_graph_def(), or is it something else?)

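For what it's worth, the two exports feed different loaders; a rough sketch of each (file names are illustrative):

# export_meta_graph writes a MetaGraphDef: the graph structure plus saver
# info, with the weights left in a separate checkpoint
tf.train.export_meta_graph(filename='models/my-model.meta')

# a frozen GraphDef (the freeze_graph() output) is self-contained and is
# loaded back with import_graph_def()
with tf.gfile.GFile('graph/frozen_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='')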

Also, one question about how to use write_graph(): in the documentation the graph_def is given by sess.graph_def, but in the examples for freeze_graph() it is sess.graph.as_graph_def(). What is the difference between these two?

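As far as I can tell, the two spellings give you the same thing: sess.graph_def is just a convenience property that returns the session graph's GraphDef. A minimal sketch:

with tf.Session() as sess:
    gd1 = sess.graph_def              # property on the session
    gd2 = sess.graph.as_graph_def()   # explicit call on the graph
    # both are GraphDef protos describing the same graph
    tf.train.write_graph(gd1, 'graph/', 'my_graph.pbtxt', as_text=True)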

This question is related to this issue.

Thank you!

Answered by Alex Joz

For prediction (and every other operation) you can do something like this:

First of all, in Python you should name your variables or operations for future use:

# name the init op so it can be looked up by name later
self.init = tf.initialize_variables(tf.all_variables(), name="nInit")
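
The same trick applies to any op you plan to fetch at inference time; for example (the softmax and its name here are purely illustrative):

self.output = tf.nn.softmax(logits, name="nOutput")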

After training, once your variables have their final values assigned, go through all of them and save them as constants into your graph (almost the same can be done with the freeze tool, but I usually do it myself; note "name=nWeights" in the Python and C++ below):

def save(self, filename):
    # bake the trained value of every variable into the graph: for each
    # variable, create a constant holding its current value and an assign
    # op (named "nWeights") that restores the variable from that constant
    for variable in tf.trainable_variables():
        tensor = tf.constant(variable.eval())
        tf.assign(variable, tensor, name="nWeights")

    # serialize the graph, which now carries the baked-in weights
    # (note: the filename argument is unused here; the path is hardcoded)
    tf.train.write_graph(self.sess.graph_def, 'graph/', 'my_graph.pb', as_text=False)

Now switch to C++: load our graph and restore the variables from the saved constants:

#include <tensorflow/core/public/session.h>

// session and graph_def are assumed to be members of the enclosing class:
//   std::unique_ptr<tensorflow::Session> session;
//   tensorflow::GraphDef graph_def;
void load(std::string my_model) {
        // read the serialized GraphDef written by tf.train.write_graph()
        auto load_graph_status =
                ReadBinaryProto(tensorflow::Env::Default(), my_model, &graph_def);

        auto session_status = session->Create(graph_def);

        std::vector<tensorflow::Tensor> out;
        std::vector<std::string> vNames;

        // collect the names of all "nWeights" assign ops baked into the graph
        int node_count = graph_def.node_size();
        for (int i = 0; i < node_count; i++) {
            auto n = graph_def.node(i);

            if (n.name().find("nWeights") != std::string::npos) {
                vNames.push_back(n.name());
            }
        }

        // fetching the assign ops runs them, copying the saved constants
        // back into the corresponding variables
        session->Run({}, vNames, {}, &out);
}

Now you have all of your neural net weights or other variables loaded.

Similarly, you can perform other operations (remember the names?): create input and output tensors of the proper size, fill the input tensor with data, and run the session like so:

// input: a vector of {tensor_name, Tensor} pairs to feed; out receives the fetched tensors
auto operationStatus = session->Run(input, {"put_your_operation_here"}, {}, &out);

Answered by Martin Pecka

Here's my solution utilizing the V2 checkpoints introduced in TF 0.12.

There's no need to convert all variables to constants or freeze the graph.

Just for clarity, this is what a V2 checkpoint looks like in my models directory:

checkpoint  # some information on the name of the files in the checkpoint
my-model.data-00000-of-00001  # the saved weights
my-model.index  # probably definition of data layout in the previous file
my-model.meta  # protobuf of the graph (nodes and topology info)
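
The common prefix (here models/my-model) is what the TF APIs expect; e.g., as a quick sanity check:

print(tf.train.latest_checkpoint('models/'))  # prints 'models/my-model'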

Python part (saving)

with tf.Session() as sess:
    tf.train.Saver(tf.trainable_variables()).save(sess, 'models/my-model')

If you create the Saver with tf.trainable_variables(), you can save yourself some headache and storage space. But maybe some more complicated models need all data to be saved; in that case remove this argument to Saver, and just make sure you're creating the Saver after your graph is created. It is also very wise to give all variables/layers unique names, otherwise you can run into various problems.

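A minimal sketch of what such explicit, unique naming might look like (the sizes and names are illustrative):

x = tf.placeholder(tf.float32, [None, 784], name='input')
W = tf.Variable(tf.zeros([784, 10]), name='weights')
b = tf.Variable(tf.zeros([10]), name='bias')
y = tf.nn.softmax(tf.matmul(x, W) + b, name='output')
saver = tf.train.Saver(tf.trainable_variables())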

Python part (inference)

with tf.Session() as sess:
    # rebuild the graph from the .meta file, then restore the weights
    saver = tf.train.import_meta_graph('models/my-model.meta')
    saver.restore(sess, tf.train.latest_checkpoint('models/'))
    outputTensors = sess.run(outputOps, feed_dict=feedDict)
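
Here outputOps and feedDict stand for your own fetches and feeds; after import_meta_graph you can recover them by the names given at save time, e.g. (assuming the illustrative 'input'/'output' names from above):

# inside the same `with tf.Session() as sess:` block
graph = tf.get_default_graph()
x = graph.get_tensor_by_name('input:0')
y = graph.get_tensor_by_name('output:0')
outputTensors = sess.run(y, feed_dict={x: myInputData})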

C++ part (inference)

Note that checkpointPath isn't a path to any of the existing files, just their common prefix. If you mistakenly put the path to the .index file there, TF won't tell you that it was wrong, but it will die during inference due to uninitialized variables.

#include <tensorflow/core/public/session.h>
#include <tensorflow/core/protobuf/meta_graph.pb.h>

using namespace std;
using namespace tensorflow;

...
// set up your input paths
const string pathToGraph = "models/my-model.meta";
const string checkpointPath = "models/my-model";
...

auto session = NewSession(SessionOptions());
if (session == nullptr) {
    throw runtime_error("Could not create Tensorflow session.");
}

Status status;

// Read in the protobuf graph we exported
MetaGraphDef graph_def;
status = ReadBinaryProto(Env::Default(), pathToGraph, &graph_def);
if (!status.ok()) {
    throw runtime_error("Error reading graph definition from " + pathToGraph + ": " + status.ToString());
}

// Add the graph to the session
status = session->Create(graph_def.graph_def());
if (!status.ok()) {
    throw runtime_error("Error creating graph: " + status.ToString());
}

// Read weights from the saved checkpoint
Tensor checkpointPathTensor(DT_STRING, TensorShape());
checkpointPathTensor.scalar<std::string>()() = checkpointPath;
status = session->Run(
        {{ graph_def.saver_def().filename_tensor_name(), checkpointPathTensor },},
        {},
        {graph_def.saver_def().restore_op_name()},
        nullptr);
if (!status.ok()) {
    throw runtime_error("Error loading checkpoint from " + checkpointPath + ": " + status.ToString());
}

// and run the inference to your liking
auto feedDict = ...
auto outputOps = ...
std::vector<tensorflow::Tensor> outputTensors;
status = session->Run(feedDict, outputOps, {}, &outputTensors);