Disclaimer: this page is a side-by-side Chinese-English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same license and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/45645276/
Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2/convolution'
Asked by Mohammad Nurdin
I got this error message when declaring the input layer in Keras.
ValueError: Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2/convolution' (op: 'Conv2D') with input shapes: [?,1,28,28], [3,3,28,32].
My code is like this:
model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1,28,28)))
Sample application: https://github.com/IntellijSys/tensorflow/blob/master/Keras.ipynb
Answered by ml4294
By default, Convolution2D (https://keras.io/layers/convolutional/) expects the input to be in the format (samples, rows, cols, channels), which is "channels-last". Your data seems to be in the format (samples, channels, rows, cols). You should be able to fix this by passing the optional keyword data_format='channels_first' when declaring the Convolution2D layer:
model.add(Convolution2D(32, (3, 3), activation='relu', input_shape=(1,28,28), data_format='channels_first'))
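To see why the wrong data format produces exactly this message: under the default channels-last interpretation, the input shape (1, 28, 28) is read as rows=1, cols=28, channels=28, and a 'valid' 3x3 convolution over a rows dimension of size 1 yields a negative output size. A minimal sketch of that arithmetic:

```python
# With data_format='channels_last' (the default), the input shape
# (1, 28, 28) is misread as rows=1, cols=28, channels=28.
rows, kernel = 1, 3

# Output size along one dimension for a 'valid' convolution, stride 1.
out_rows = rows - kernel + 1

print(out_rows)  # 1 - 3 + 1 = -1, hence "subtracting 3 from 1"
```

With channels_first, the same shape is read as channels=1, rows=28, cols=28, and the 3x3 kernel fits comfortably.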
Answered by charel-f
I had the same problem, however the solution provided in this thread did not help me. In my case it was a different problem that caused this error:
Code
imageSize=32
classifier=Sequential()
classifier.add(Conv2D(64, (3, 3), input_shape = (imageSize, imageSize, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(64, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(64, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(64, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(64, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Flatten())
Error
The image size is 32 by 32. After the first convolutional layer (a 3x3 kernel with 'valid' padding), it is reduced to 30 by 30. (If I understood convolution correctly.)
Then the pooling layer halves it, so 15 by 15...
I hope you can see where this is going: in the end, my feature map is so small that my pooling layer (or convolution layer) is too big to slide over it, and that causes the error.
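The shrinking can be checked with a few lines of arithmetic (a sketch assuming 3x3 'valid' convolutions, stride 1, and 2x2 max pooling with stride 2, as in the code above):

```python
def conv_out(size, kernel=3):
    """One spatial dimension after a 'valid' convolution, stride 1."""
    return size - kernel + 1

def pool_out(size, pool=2, stride=2):
    """One spatial dimension after 'valid' max pooling."""
    return (size - pool) // stride + 1

size = 32
for pair in range(1, 6):  # five Conv2D + MaxPooling2D pairs
    size = conv_out(size)
    if size <= 0:
        print(f"conv in pair {pair}: size {size} -> ValueError")
        break
    size = pool_out(size)
    print(f"after pair {pair}: {size}x{size}")
```

Tracing it through: 32 → 30 → 15 → 13 → 6 → 4 → 2, and the fourth convolution already runs out of pixels, so the fifth pair is never reached.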
Solution
The easy solution to this error is to either make the input image bigger or use fewer convolutional or pooling layers.
Answered by Reeves
Keras is available with the following backends:
TensorFlow (by Google), Theano (developed by the LISA lab), CNTK (by Microsoft)
Whenever you see an error with shapes like [?,X,X,X], [X,Y,Z,X], it is a channel-ordering issue. To fix it, set the image dimension ordering of Keras:
Import
from keras import backend as K
K.set_image_dim_ordering('th')  # 'th' = Theano-style ordering (channels first)
"tf" format means that the convolutional kernels will have the shape (rows, cols, input_depth, depth)
This will always work ...
Answered by Amol Bhatnate
Alternatively, you can preserve the spatial dimensions of the volume, so that the output size matches the input size, by setting the padding to "same": use padding='same'.
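The difference between 'valid' and 'same' padding can be sketched with one-dimensional output-size formulas (following TensorFlow's conventions):

```python
import math

def out_valid(size, kernel, stride=1):
    """'valid': only full kernel positions, so the feature map shrinks."""
    return (size - kernel) // stride + 1

def out_same(size, stride=1):
    """'same': zero-pad so the output size depends only on the stride."""
    return math.ceil(size / stride)

print(out_valid(32, 3))  # 30 -> shrinks by 2 per 3x3 layer
print(out_same(32))      # 32 -> spatial size preserved
```

With padding='same' on every Conv2D, only the pooling layers shrink the feature map, so deeper stacks fit on small inputs.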
Answered by Irina
Use the following:
from keras import backend
backend.set_image_data_format('channels_last')
Depending on your preference, you can use 'channels_first' or 'channels_last' to set the image data format. (Source)
If this does not work and your image size is small, try reducing the architecture of your CNN, as previous posters mentioned.
Hope it helps!