Python Tensorflow TypeError: Fetch argument None has invalid type <type 'NoneType'>?

Note: this content is adapted from a popular Stack Overflow question and is provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must keep the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/39114832/

Tags: python, artificial-intelligence, tensorflow, typeerror, recurrent-neural-network

Asked by agupta231

I'm building an RNN loosely based on the TensorFlow tutorial.

The relevant parts of my model are as follows:

input_sequence = tf.placeholder(tf.float32, [BATCH_SIZE, TIME_STEPS, PIXEL_COUNT + AUX_INPUTS])
output_actual = tf.placeholder(tf.float32, [BATCH_SIZE, OUTPUT_SIZE])

lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(CELL_SIZE, state_is_tuple=False)
stacked_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * CELL_LAYERS, state_is_tuple=False)

initial_state = state = stacked_lstm.zero_state(BATCH_SIZE, tf.float32)
outputs = []

with tf.variable_scope("LSTM"):
    for step in xrange(TIME_STEPS):
        if step > 0:
            tf.get_variable_scope().reuse_variables()
        cell_output, state = stacked_lstm(input_sequence[:, step, :], state)
        outputs.append(cell_output)

final_state = state

And the training loop that feeds it:

cross_entropy = tf.reduce_mean(-tf.reduce_sum(output_actual * tf.log(prediction), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(output_actual, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    numpy_state = initial_state.eval()

    for i in xrange(1, ITERATIONS):
        batch = DI.next_batch()

        print i, type(batch[0]), np.array(batch[1]).shape, numpy_state.shape

        if i % LOG_STEP == 0:
            train_accuracy = accuracy.eval(feed_dict={
                initial_state: numpy_state,
                input_sequence: batch[0],
                output_actual: batch[1]
            })

            print "Iteration " + str(i) + " Training Accuracy " + str(train_accuracy)

        numpy_state, train_step = sess.run([final_state, train_step], feed_dict={
            initial_state: numpy_state,
            input_sequence: batch[0],
            output_actual: batch[1]
            })

When I run this, I get the following error:

Traceback (most recent call last):
  File "/home/agupta/Documents/Projects/Image-Recognition-with-LSTM/RNN/feature_tracking/model.py", line 109, in <module>
    output_actual: batch[1]
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 698, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 838, in _run
    fetch_handler = _FetchHandler(self._graph, fetches)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 355, in __init__
    self._fetch_mapper = _FetchMapper.for_fetch(fetches)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 181, in for_fetch
    return _ListFetchMapper(fetch)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 288, in __init__
    self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 178, in for_fetch
    (fetch, type(fetch)))
TypeError: Fetch argument None has invalid type <type 'NoneType'>

Perhaps the weirdest part is that this error gets thrown on the second iteration, while the first works completely fine. I'm ripping my hair out trying to fix this, so any help would be greatly appreciated.

Answered by mrry

You are re-assigning the train_step variable to the second element of the result of sess.run() (which happens to be None). Hence, on the second iteration, train_step is None, which leads to the error.

The fix is fortunately simple:

for i in xrange(1, ITERATIONS):

    # ...

    # Discard the second element of the result.
    numpy_state, _ = sess.run([final_state, train_step], feed_dict={
        initial_state: numpy_state,
        input_sequence: batch[0],
        output_actual: batch[1]
        })
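To make the behaviour concrete, here is a minimal toy sketch of my own (the graph and variable names are hypothetical, not from the question): sess.run() returns the value of a Tensor fetch, but returns None for an Operation fetch such as the op produced by Optimizer.minimize(), so assigning the run result back to the Python variable that held the op destroys it.

import tensorflow as tf

# Hypothetical toy graph: one variable, a squared loss, and a training op.
x = tf.Variable(3.0)
loss = tf.square(x)
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)  # a tf.Operation

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())

    # A Tensor fetch comes back as its value; an Operation fetch comes back as None.
    results = sess.run([train_op, loss])
    print results                  # [None, <loss value>]

    # The bug pattern from the question: the run result overwrites the op itself.
    train_op = sess.run(train_op)  # train_op is now None
    # sess.run(train_op)           # would raise: Fetch argument None has invalid type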

Answered by Peter Mitrano

Another common reason for this error is fetching the merged summary op when no summaries have actually been registered in the graph.

Example:

# tf.summary.scalar("loss", loss) # <- uncomment this line and it will work fine
summary_op = tf.summary.merge_all()
sess = tf.Session()
# ...
summary = sess.run([summary_op, ...], feed_dict={...}) # TypeError, summary_op is "None"!

What's extra confusing is that nothing in your code looks like a None: tf.summary.merge_all() silently returns None when no summaries have been registered, and the error only surfaces from inside the session's run method.

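A small defensive sketch of my own (not from the answer; the graph is hypothetical): since tf.summary.merge_all() returns None when nothing has been registered, you can build the fetch list conditionally so the merged op is only fetched when it actually exists.

import tensorflow as tf

x = tf.Variable(2.0)
loss = tf.square(x)
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)

# tf.summary.scalar("loss", loss)   # no summaries registered here, so...
summary_op = tf.summary.merge_all() # ...merge_all() returns None

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Only add the merged summary to the fetches when it exists.
    fetches = [train_op, loss]
    if summary_op is not None:
        fetches.append(summary_op)

    results = sess.run(fetches)
    print results[1]               # the loss value; results[2] would be the summary if registered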