Disclaimer: This page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license, cite the original URL, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/38626435/
Tensorflow ValueError: No variables to save from
Asked by Zan Huang
I have written a TensorFlow CNN and it is already trained. I wish to restore it to run it on a few samples, but unfortunately it's spitting out:
ValueError: No variables to save
My eval code can be found here:
import tensorflow as tf
import main
import Process
import Input
eval_dir = "/Users/Zanhuang/Desktop/NNP/model.ckpt-30"
checkpoint_dir = "/Users/Zanhuang/Desktop/NNP/checkpoint"
init_op = tf.initialize_all_variables()
saver = tf.train.Saver()
def evaluate():
    with tf.Graph().as_default() as g:
        sess.run(init_op)
        ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
        saver.restore(sess, eval_dir)
        images, labels = Process.eval_inputs(eval_data = eval_data)
        forward_propgation_results = Process.forward_propagation(images)
        top_k_op = tf.nn.in_top_k(forward_propgation_results, labels, 1)
        print(top_k_op)

def main(argv=None):
    evaluate()

if __name__ == '__main__':
    tf.app.run()
Answered by mrry
The tf.train.Saver must be created after the variables that you want to restore (or save). Additionally, it must be created in the same graph as those variables.
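For illustration, here is a minimal sketch of the wrong and right ordering (the variable w is a hypothetical placeholder, not part of the original question):

import tensorflow as tf

with tf.Graph().as_default():
    # Wrong: no variables exist yet, so this line would raise
    # "ValueError: No variables to save".
    # saver = tf.train.Saver()

    w = tf.Variable(tf.zeros([10]), name='w')  # hypothetical model variable

    # Right: created after the variables, in the same graph,
    # so the Saver picks up w automatically.
    saver = tf.train.Saver()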
Assuming that Process.forward_propagation(…) also creates the variables in your model, adding the saver creation after this line should work:
forward_propgation_results = Process.forward_propagation(images)
In addition, you must pass the new tf.Graph that you created to the tf.Session constructor, so you'll need to move the creation of sess inside that with block as well.
The resulting function will be something like:
def evaluate():
    with tf.Graph().as_default() as g:
        images, labels = Process.eval_inputs(eval_data = eval_data)
        forward_propgation_results = Process.forward_propagation(images)
        init_op = tf.initialize_all_variables()
        saver = tf.train.Saver()
        top_k_op = tf.nn.in_top_k(forward_propgation_results, labels, 1)

        with tf.Session(graph=g) as sess:
            sess.run(init_op)
            saver.restore(sess, eval_dir)
            print(sess.run(top_k_op))
Answered by Ahmad Asghariyan Rezayi
Simply put, at least one tf.Variable must be defined before you create your Saver object.
You can get the above code running by adding the following line of code before the saver object definition.
The code that you need to add appears between the two ### markers.
import tensorflow as tf
import main
import Process
import Input
eval_dir = "/Users/Zanhuang/Desktop/NNP/model.ckpt-30"
checkpoint_dir = "/Users/Zanhuang/Desktop/NNP/checkpoint"
init_op = tf.initialize_all_variables()
### Here Comes the fake variable that makes defining a saver object possible.
_ = tf.Variable(initial_value='fake_variable')
###
saver = tf.train.Saver()
...
Answered by P-Gn
Note that since TF 0.11 (a long time ago, yet after the currently accepted answer), tf.train.Saver has a defer_build argument in its constructor that allows you to define variables after the Saver has been constructed. However, you then need to call its build member function once all variables have been added, typically just before finalizing your graph.
saver = tf.train.Saver(defer_build=True)
# build your graph here
saver.build()
graph.finalize()
# now entering training loop
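As a minimal, self-contained sketch of this pattern (the variables w and b and the checkpoint path are hypothetical placeholders):

import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # The Saver can be constructed before any variables exist
    # because its build step is deferred.
    saver = tf.train.Saver(defer_build=True)

    # Variables defined after the Saver's construction.
    w = tf.Variable(tf.zeros([10]), name='w')
    b = tf.Variable(tf.zeros([10]), name='b')

    init_op = tf.initialize_all_variables()
    saver.build()      # collects all variables defined so far
    graph.finalize()   # no further ops can be added

with tf.Session(graph=graph) as sess:
    sess.run(init_op)
    saver.save(sess, '/tmp/model.ckpt')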