Python TensorFlow ValueError: Cannot feed value of shape (64, 64, 3) for Tensor u'Placeholder:0', which has shape '(?, 64, 64, 3)'
Disclaimer: This page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/40430186/
TensorFlow ValueError: Cannot feed value of shape (64, 64, 3) for Tensor u'Placeholder:0', which has shape '(?, 64, 64, 3)'
Asked by Pragyan93
I am new to TensorFlow and machine learning. I am trying to classify two objects, a cup and a pendrive (JPEG images). I have trained and exported a model.ckpt successfully. Now I am trying to restore the saved model.ckpt for prediction. Here is the script:
import tensorflow as tf
import math
import numpy as np
from PIL import Image
from numpy import array
# image parameters
IMAGE_SIZE = 64
IMAGE_CHANNELS = 3
NUM_CLASSES = 2
def main():
    image = np.zeros((64, 64, 3))
    img = Image.open('./IMG_0849.JPG')
    img = img.resize((64, 64))
    image = array(img).reshape(64, 64, 3)
    k = int(math.ceil(IMAGE_SIZE / 2.0 / 2.0 / 2.0 / 2.0))
    # Store weights for our convolution and fully-connected layers
    with tf.name_scope('weights'):
        weights = {
            # 5x5 conv, 3 input channels, 32 outputs each
            'wc1': tf.Variable(tf.random_normal([5, 5, 1 * IMAGE_CHANNELS, 32])),
            # 5x5 conv, 32 inputs, 64 outputs
            'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
            # 5x5 conv, 64 inputs, 128 outputs
            'wc3': tf.Variable(tf.random_normal([5, 5, 64, 128])),
            # 5x5 conv, 128 inputs, 256 outputs
            'wc4': tf.Variable(tf.random_normal([5, 5, 128, 256])),
            # fully connected, k * k * 256 inputs, 1024 outputs
            'wd1': tf.Variable(tf.random_normal([k * k * 256, 1024])),
            # 1024 inputs, 2 class labels (prediction)
            'out': tf.Variable(tf.random_normal([1024, NUM_CLASSES]))
        }
    # Store biases for our convolution and fully-connected layers
    with tf.name_scope('biases'):
        biases = {
            'bc1': tf.Variable(tf.random_normal([32])),
            'bc2': tf.Variable(tf.random_normal([64])),
            'bc3': tf.Variable(tf.random_normal([128])),
            'bc4': tf.Variable(tf.random_normal([256])),
            'bd1': tf.Variable(tf.random_normal([1024])),
            'out': tf.Variable(tf.random_normal([NUM_CLASSES]))
        }
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, "./model.ckpt")
        print "...Model Loaded..."
        x_ = tf.placeholder(tf.float32, shape=[None, IMAGE_SIZE, IMAGE_SIZE, IMAGE_CHANNELS])
        y_ = tf.placeholder(tf.float32, shape=[None, NUM_CLASSES])
        keep_prob = tf.placeholder(tf.float32)
        init = tf.initialize_all_variables()
        sess.run(init)
        my_classification = sess.run(tf.argmax(y_, 1), feed_dict={x_: image})
        print 'Neural Network predicted', my_classification[0], "for your image"

if __name__ == '__main__':
    main()
When I run the above script for prediction, I get the following error:
ValueError: Cannot feed value of shape (64, 64, 3) for Tensor u'Placeholder:0', which has shape '(?, 64, 64, 3)'
What am I doing wrong? And how do I fix the shape of the NumPy array?
Accepted answer by nessuno
image has a shape of (64, 64, 3).
Your input placeholder x_ has a shape of (?, 64, 64, 3).
The problem is that you're feeding the placeholder with a value of a different shape.
You have to feed it a value of shape (1, 64, 64, 3), i.e. a batch of one image.
Just reshape your image value to a batch of size one:
image = array(img).reshape(1, 64, 64, 3)
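For reference, here is a minimal standalone sketch of just that preprocessing step, assuming the IMG_0849.JPG file from the question (any RGB image works):

import numpy as np
from PIL import Image

# Assumes the image file from the question; any RGB image will do.
img = Image.open('./IMG_0849.JPG').resize((64, 64))
image = np.array(img).reshape(1, 64, 64, 3)  # a batch containing one image
print(image.shape)  # (1, 64, 64, 3) -- now matches the placeholder (?, 64, 64, 3)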
P.S.: the fact that the input placeholder accepts a batch of images means that you can run predictions for a batch of images in parallel. You can try to read more than one image (N images) and then build a batch of N images, using a tensor with shape (N, 64, 64, 3).
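For example, a minimal sketch of building such a batch with np.stack (the file names below are hypothetical):

import numpy as np
from PIL import Image

# Hypothetical file names; any N same-size RGB images will do.
paths = ['./cup_1.jpg', './pendrive_1.jpg', './cup_2.jpg']
images = [np.array(Image.open(p).resize((64, 64))) for p in paths]
batch = np.stack(images, axis=0)  # shape (N, 64, 64, 3)
print(batch.shape)  # (3, 64, 64, 3) -- one sess.run call can then score all N images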
Answered by rocksyne
Powder's comment may go undetected, like I missed it so many times. So, with the hope of making it more visible, I will reiterate his point.
Sometimes using image = array(img).reshape(a, b, c, d) will reshape just fine, but from experience my kernel crashes every time I try to use the new dimension in an operation. The safest option is
np.expand_dims(img, axis=0)
It works perfectly every time. I just can't explain why. This link has a great explanation and examples regarding its usage.
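To illustrate the two options side by side, here is a minimal sketch that uses a zero array as a stand-in for a decoded image:

import numpy as np

img = np.zeros((64, 64, 3), dtype=np.float32)  # stand-in for a decoded 64x64 RGB image

batched = np.expand_dims(img, axis=0)  # inserts a new leading axis of length 1
print(batched.shape)                   # (1, 64, 64, 3)

# reshape(1, 64, 64, 3) produces the same shape, but expand_dims does not require
# spelling out every dimension, so there is no size to miscount.
print(img.reshape(1, 64, 64, 3).shape)  # (1, 64, 64, 3)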