Python: Keras input_shape for conv2d and manually loaded images

Note: this page is a translation of a popular StackOverFlow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same license and attribute it to the original authors (not me): StackOverFlow
Original URL: http://stackoverflow.com/questions/43895750/
Asked by Stormsson
I am manually creating my dataset from a number of 384x286 b/w images.
I load an image like this:
x = []
for f in files:
    img = Image.open(f)
    img.load()
    data = np.asarray(img, dtype="int32")
    x.append(data)
x = np.array(x)
This results in x being an array of shape (num_samples, 286, 384):
print(x.shape) => (100, 286, 384)
Reading the Keras documentation and checking my backend, I should provide the convolution step with an input_shape composed of (rows, cols, channels).
Since I don't know the sample size in advance, I would have expected to pass an input shape similar to
( None, 286, 384, 1 )
The model is built as follows:
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
# other steps...
Passing (286, 384, 1) as input_shape causes:
Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (85, 286, 384)
Passing (None, 286, 384, 1) as input_shape causes:
Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5
What am I doing wrong? How do I have to reshape the input array?
Answered by Wilmar van Ommeren
Set the input_shape to (286, 384, 1). Now the model expects an input with 4 dimensions. This means that you have to reshape your image with .reshape(n_images, 286, 384, 1). Now you have added an extra dimension without changing the data, and your model is ready to run. Basically, you need to reshape your data to (n_images, x_shape, y_shape, channels).
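For instance, a minimal sketch of that reshape, using a zero array in place of the x built in the question (assumed shape (num_samples, 286, 384)):

import numpy as np

# stand-in for the x array from the question: shape (num_samples, 286, 384)
x = np.zeros((100, 286, 384), dtype="int32")

# add a trailing channel axis so the data matches input_shape=(286, 384, 1)
x = x.reshape(x.shape[0], 286, 384, 1)
print(x.shape)  # (100, 286, 384, 1)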
The cool thing is that you can also use an RGB image as input. Just change channels to 3.
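A rough sketch of that RGB variant, assuming a loading loop like the one in the question (the glob pattern here is only a hypothetical placeholder for your own file list):

import glob
import numpy as np
from PIL import Image

files = glob.glob("images/*.png")  # hypothetical path; substitute your own file list

x = []
for f in files:
    img = Image.open(f).convert("RGB")  # force 3 channels even for b/w files
    x.append(np.asarray(img, dtype="int32"))

x = np.array(x)  # (num_samples, 286, 384, 3) for 384x286 images
# the corresponding input_shape would be (286, 384, 3)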
Check also this answer: Keras input explanation: input_shape, units, batch_size, dim, etc
Example
import numpy as np
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D
from keras.layers.core import Flatten, Dense, Activation
from keras.utils import np_utils
#Create model
model = Sequential()
model.add(Convolution2D(32, kernel_size=(3, 3), activation='relu', input_shape=(286,384,1)))
model.add(Flatten())
model.add(Dense(2))
model.add(Activation('softmax'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
#Create random data
n_images=100
data = np.random.randint(0,2,n_images*286*384)
labels = np.random.randint(0,2,n_images)
labels = np_utils.to_categorical(list(labels))
#add dimension to images
data = data.reshape(n_images,286,384,1)
#Fit model
model.fit(data, labels, verbose=1)
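Note that the random data above only checks that the shapes line up; with real images you would pass the reshaped array built from your files (and real labels) to model.fit instead.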
Answered by thefifthHyman005
Your input_shape dimension is correct, i.e. input_shape=(286, 384, 1).
Reshape your input_image to 4D: [batch_size, img_height, img_width, number_of_channels]
input_image = input_image.reshape(85, 286, 384, 1)
during
model.fit(input_image, label)
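Putting those two fragments together as a small sketch, with zero arrays standing in for the real images and one-hot labels:

import numpy as np

# placeholders for the 85 loaded images and their one-hot labels
input_image = np.zeros((85, 286, 384))
label = np.zeros((85, 2))

# reshape to [batch_size, img_height, img_width, number_of_channels]
input_image = input_image.reshape(85, 286, 384, 1)

# then train (assuming `model` is a compiled network like the example above)
# model.fit(input_image, label)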
Answered by Harsha Pokkalla
I think the following might resolve your error.
The input_shape we provide to the first conv2d (the first layer of the sequential model) should be something like (286, 384, 1), i.e. (rows, cols, channels). There is no need for a "None" dimension for batch_size in it.
The shape of your input data can then be (batch_size, 286, 384, 1).
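An equivalent sketch using np.expand_dims, which makes the same point: the batch axis lives in the data, not in input_shape:

import numpy as np

x = np.zeros((100, 286, 384))   # (batch_size, rows, cols)
x = np.expand_dims(x, axis=-1)  # (batch_size, 286, 384, 1)
# while the Conv2D layer itself only gets input_shape=(286, 384, 1)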
Does this help you?