Python: How to get the output shape of a layer in Keras?

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/49527159/


How to get the output shape of a layer in Keras?

Tags: python, keras, lstm, recurrent-neural-network

Asked by Maryam Rahmani Moghaddam

I have the following code in Keras (basically I am modifying this code for my own use), and I get this error:

'ValueError: Error when checking target: expected conv3d_3 to have 5 dimensions, but got array with shape (10, 4096)'

Code:

from keras.models import Sequential
from keras.layers.convolutional import Conv3D
from keras.layers.convolutional_recurrent import ConvLSTM2D
from keras.layers.normalization import BatchNormalization
import numpy as np
import pylab as plt
from keras import layers

# We create a layer which takes as input movies of shape
# (n_frames, width, height, channels) and returns a movie
# of identical shape.

model = Sequential()
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   input_shape=(None, 64, 64, 1),
                   padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(Conv3D(filters=1, kernel_size=(3, 3, 3),
               activation='sigmoid',
               padding='same', data_format='channels_last'))
model.compile(loss='binary_crossentropy', optimizer='adadelta')

The data I feed is in the following format: [1, 10, 64, 64, 1]. So I would like to know where I am wrong, and also how to see the output_shape of each layer.

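The error message refers to the target array rather than the input: the final Conv3D layer produces 5-dimensional output (samples, frames, height, width, channels), so the targets passed to model.fit must be 5-dimensional as well. Below is a minimal sketch with dummy data, assuming the (10, 4096) targets are the flattened 64x64 frames of a single sample:

import numpy as np

# Dummy stand-ins for the real data; input movies are
# (samples, frames, height, width, channels).
X = np.random.rand(1, 10, 64, 64, 1)

# Targets shaped (10, 4096): 4096 = 64 * 64, so they can be reshaped
# to the 5-D shape the Conv3D output expects.
y = np.random.rand(10, 4096).reshape(1, 10, 64, 64, 1)

model.fit(X, y, batch_size=1, epochs=1)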

Answered by umutto

You can get the output shape of a layer by layer.output_shape.

for layer in model.layers:
    print(layer.output_shape)

Gives you:

(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 1)
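
If only the final output shape is needed, the model object exposes it directly; a small sketch, using the same model as above:

# Shape of the model's final output (same as the last layer's output_shape).
print(model.output_shape)  # (None, None, 64, 64, 1)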

Alternatively, you can pretty-print the model using model.summary():

model.summary()

This gives you the number of parameters and the output shape of each layer, plus the overall model structure, in a nicely formatted table:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv_lst_m2d_1 (ConvLSTM2D)  (None, None, 64, 64, 40)  59200     
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 64, 64, 40)  160       
_________________________________________________________________
conv_lst_m2d_2 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360    
_________________________________________________________________
batch_normalization_2 (Batch (None, None, 64, 64, 40)  160       
_________________________________________________________________
conv_lst_m2d_3 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360    
_________________________________________________________________
batch_normalization_3 (Batch (None, None, 64, 64, 40)  160       
_________________________________________________________________
conv_lst_m2d_4 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360    
_________________________________________________________________
batch_normalization_4 (Batch (None, None, 64, 64, 40)  160       
_________________________________________________________________
conv3d_1 (Conv3D)            (None, None, 64, 64, 1)   1081      
=================================================================
Total params: 407,001
Trainable params: 406,681
Non-trainable params: 320
_________________________________________________________________

If you want to access information about a specific layer only, you can pass the name argument when constructing that layer and then look it up like this:

...
model.add(ConvLSTM2D(..., name='conv3d_0'))
...

model.get_layer('conv3d_0')
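
Once the layer has a name, its output shape can be read straight off the object returned by get_layer; a short sketch, assuming the naming shown above:

named_layer = model.get_layer('conv3d_0')
print(named_layer.output_shape)  # e.g. (None, None, 64, 64, 40)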


EDIT: For reference's sake, it will always be the same as layer.output_shape, and please don't actually use Lambda or custom layers for this. But you can use a Lambda layer to echo the shape of a tensor passing through it.

...
from keras.layers import Lambda

def print_tensor_shape(x):
    # Printed when the graph is built; shows the (symbolic) shape of the tensor.
    print(x.shape)
    return x

model.add(Lambda(print_tensor_shape))
...

Or write a custom layer and print the shape of the tensor in its call() method.

from keras.layers import Layer

class echo_layer(Layer):
...
    def call(self, x):
        # Print the (symbolic) shape and pass the tensor through unchanged.
        print(x.shape)
        return x
...

model.add(echo_layer())