Python FailedPreconditionError: Attempting to use uninitialized in TensorFlow
Original URL: http://stackoverflow.com/questions/34001922/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share them, but you must attribute them to the original authors (not me): StackOverflow
FailedPreconditionError: Attempting to use uninitialized in Tensorflow
Asked by user3654387
I am working through the TensorFlow tutorial, which uses a "weird" format to upload the data. I would like to use the NumPy or pandas format for the data, so that I can compare it with scikit-learn results.
I get the digit recognition data from Kaggle: https://www.kaggle.com/c/digit-recognizer/data.
Here is the code from the TensorFlow tutorial (which works fine):
# Stuff from tensorflow tutorial
import tensorflow as tf
sess = tf.InteractiveSession()
x = tf.placeholder("float", shape=[None, 784])
y_ = tf.placeholder("float", shape=[None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
Here I read the data, strip out the target variables and split the data into testing and training datasets (this all works fine):
# Read dataframe from training data
import numpy as np
import pandas as pd
from pandas import read_csv

csvfile = 'train.csv'
df = read_csv(csvfile)
# Strip off the target data and make it a separate dataframe.
Target = df.label
del df["label"]
# Split data into training and testing sets
msk = np.random.rand(len(df)) < 0.8
dfTest = df[~msk]
TargetTest = Target[~msk]
df = df[msk]
Target = Target[msk]
# One hot encode the target
OHTarget=pd.get_dummies(Target)
OHTargetTest=pd.get_dummies(TargetTest)
Now, when I try to run the training step, I get a FailedPreconditionError:
for i in range(100):
batch = np.array(df[i*50:i*50+50].values)
batch = np.multiply(batch, 1.0 / 255.0)
Target_batch = np.array(OHTarget[i*50:i*50+50].values)
Target_batch = np.multiply(Target_batch, 1.0 / 255.0)
train_step.run(feed_dict={x: batch, y_: Target_batch})
Here's the full error:
---------------------------------------------------------------------------
FailedPreconditionError Traceback (most recent call last)
<ipython-input-82-967faab7d494> in <module>()
4 Target_batch = np.array(OHTarget[i*50:i*50+50].values)
5 Target_batch = np.multiply(Target_batch, 1.0 / 255.0)
----> 6 train_step.run(feed_dict={x: batch, y_: Target_batch})
/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in run(self, feed_dict, session)
1265 none, the default session will be used.
1266 """
-> 1267 _run_using_default_session(self, feed_dict, self.graph, session)
1268
1269
/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in _run_using_default_session(operation, feed_dict, graph, session)
2761 "the operation's graph is different from the session's "
2762 "graph.")
-> 2763 session.run(operation, feed_dict)
2764
2765
/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict)
343
344 # Run request and get response.
--> 345 results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
346
347 # User may have fetched the same tensor multiple times, but we
/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _do_run(self, target_list, fetch_list, feed_dict)
417 # pylint: disable=protected-access
418 raise errors._make_specific_exception(node_def, op, e.error_message,
--> 419 e.code)
420 # pylint: enable=protected-access
421 raise e_type, e_value, e_traceback
FailedPreconditionError: Attempting to use uninitialized value Variable_1
[[Node: gradients/add_grad/Shape_1 = Shape[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Variable_1)]]
Caused by op u'gradients/add_grad/Shape_1', defined at:
File "/Users/user32/anaconda/lib/python2.7/runpy.py", line 162, in _run_module_as_main
...........
...which was originally created as op u'add', defined at:
File "/Users/user32/anaconda/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
[elided 17 identical lines from previous traceback]
File "/Users/user32/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 3066, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-45-59183d86e462>", line 1, in <module>
y = tf.nn.softmax(tf.matmul(x,W) + b)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 403, in binary_op_wrapper
return func(x, y, name=name)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 44, in add
return _op_def_lib.apply_op("Add", x=x, y=y, name=name)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 633, in apply_op
op_def=op_def)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1710, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 988, in __init__
self._traceback = _extract_stack()
Any ideas as to how I can fix this?
Answered by mrry
The FailedPreconditionError arises because the program is attempting to read a variable (named "Variable_1") before it has been initialized. In TensorFlow, all variables must be explicitly initialized by running their "initializer" operations. For convenience, you can run all of the variable initializers in the current session by executing the following statement before your training loop:
tf.initialize_all_variables().run()
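Applied to the question's code, a minimal sketch (it reuses x, y_, train_step, df and OHTarget from the question; the extra 1/255 rescaling of the one-hot targets is dropped here, since that scaling only makes sense for the pixel values):

# Run all variable initializers once, before the training loop.
tf.initialize_all_variables().run()

for i in range(100):
    batch = np.multiply(np.array(df[i*50:i*50+50].values), 1.0 / 255.0)
    Target_batch = np.array(OHTarget[i*50:i*50+50].values)
    train_step.run(feed_dict={x: batch, y_: Target_batch})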
Note that this answer assumes that, as in the question, you are using tf.InteractiveSession, which allows you to run operations without specifying a session. For non-interactive uses, it is more common to use tf.Session, and initialize as follows:
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init_op)
Answered by user3144836
tf.initialize_all_variables() is deprecated. Instead, initialize TensorFlow variables with:
tf.global_variables_initializer()
A common example usage is:
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
Answered by Salvador Dali
From the official documentation of FailedPreconditionError:
This exception is most commonly raised when running an operation that reads a tf.Variable before it has been initialized.
In your case the error even tells you which variable was not initialized: Attempting to use uninitialized value Variable_1. One of the TF tutorials explains a lot about variables: their creation, initialization, saving and loading.
Basically, to initialize variables you have 3 options:
- initialize all global variables with tf.global_variables_initializer()
- initialize the variables you care about with tf.variables_initializer(list_of_vars). Notice that you can use this function to mimic global_variables_initializer: tf.variables_initializer(tf.global_variables())
- initialize only one variable with var_name.initializer

A minimal sketch of the second and third options appears after the example below.
I almost always use the first approach. Remember that you should run it inside a session. So you will get something like this:
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
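For completeness, a minimal sketch of the second and third options (it assumes two variables a and b have been created in the graph; the names are only illustrative):

a = tf.Variable(tf.zeros([10]), name="a")
b = tf.Variable(tf.ones([10]), name="b")

with tf.Session() as sess:
    # Option 2: initialize only the variables you list.
    sess.run(tf.variables_initializer([a, b]))
    # Option 3: initialize a single variable via its initializer op.
    sess.run(a.initializer)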
If you are curious about more information about variables, read this documentation to learn how to report_uninitialized_variables and check is_variable_initialized.
Answered by Tal
I got this error message from a completely different case. It seemed that the exception handler in TensorFlow raised it. You can check each row in the traceback. In my case, it happened in tensorflow/python/lib/io/file_io.py, because this file contained a different bug, where self.__mode and self.__name weren't initialized, and it needed to call self._FileIO__mode and self._FileIO__name instead.
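As an aside, the renaming described above is just Python's name mangling of double-underscore attributes. A small illustration (this FileIO class is only a stand-in for the real one in file_io.py):

class FileIO(object):
    def __init__(self, name, mode):
        self.__name = name   # stored on the instance as self._FileIO__name
        self.__mode = mode   # stored on the instance as self._FileIO__mode

f = FileIO("example.txt", "r")
print(f._FileIO__mode)       # prints "r"; accessing f.__mode from outside raises AttributeError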
Answered by layser
Different use case, but setting my session as the default session did the trick for me:
with sess.as_default():
result = compute_fn([seed_input,1])
This is one of those mistakes that seems so obvious once you have solved it.
My use-case is the following:
1) store Keras VGG16 as a TensorFlow graph
2) load the Keras VGG16 graph
3) run a tf function on the graph and get:
FailedPreconditionError: Attempting to use uninitialized value block1_conv2/bias
[[Node: block1_conv2/bias/read = Identity[T=DT_FLOAT, _class=["loc:@block1_conv2/bias"], _device="/job:localhost/replica:0/task:0/device:GPU:0"](block1_conv2/bias)]]
[[Node: predictions/Softmax/_7 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_168_predictions/Softmax", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Answered by Eric Antoine Scuccimarra
When I had this issue with tf.train.string_input_producer() and tf.train.batch(), initializing the local variables before I started the Coordinator solved the problem. I had been getting the error when I initialized the local variables after starting the Coordinator.
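A minimal sketch of that ordering, assuming a TF 1.x queue-based input pipeline ("data.csv" and the omitted reader/batch code are placeholders):

filename_queue = tf.train.string_input_producer(["data.csv"], num_epochs=1)
# ... build a reader and a tf.train.batch(...) pipeline from filename_queue ...

with tf.Session() as sess:
    # Initialize global and local variables (the epoch counter used by the
    # producer is a local variable) *before* starting the Coordinator.
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    # ... training loop ...
    coord.request_stop()
    coord.join(threads)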
Answered by LaSul
The FailedPreconditionError comes because the session is trying to read a variable that hasn't been initialized.
As of TensorFlow version 1.11.0, you need to do this:
init_op = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init_op)
Answered by prosti
You have to initialize variables before using them.
If you try to evaluate the variables before initializing them you'll run into:
FailedPreconditionError: Attempting to use uninitialized value tensor.
The easiest way is to initialize all variables at once using tf.global_variables_initializer():
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
You use sess.run(init) to run the initializer, without fetching any value.
To initialize only a subset of variables, you use tf.variables_initializer(), listing the variables:
var_ab = tf.variables_initializer([a, b], name="a_and_b")
with tf.Session() as sess:
sess.run(var_ab)
You can also initialize each variable separately using tf.Variable.initializer
# create variable W as 784 x 10 tensor, filled with zeros
W = tf.Variable(tf.zeros([784, 10]))
with tf.Session() as sess:
    sess.run(W.initializer)
Answered by wordsforthewise
Possibly something has changed in recent TensorFlow builds, because for me, running
sess = tf.Session()
sess.run(tf.local_variables_initializer())
before fitting any models seems to do the trick. Most older examples and comments seem to suggest tf.global_variables_initializer().
Answered by Tensorflow Support
TensorFlow 2.0 Compatible Answer: In TensorFlow version >= 2.0, the command for initializing all the variables when using graph mode, to fix the FailedPreconditionError, is shown below:
tf.compat.v1.global_variables_initializer
This is just a shortcut for variables_initializer(global_variables())
It returns an Op that initializes global variables in the graph.
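A minimal sketch of using it in TensorFlow 2.x graph mode (eager execution is disabled explicitly; the two variables stand in for your own model):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()           # use the v1-style graph/session API

W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
init_op = tf.compat.v1.global_variables_initializer()

with tf.compat.v1.Session() as sess:
    sess.run(init_op)                            # variables can now be read safely
    print(sess.run(b))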