Python ValueError: Input 0 is incompatible with layer lstm_13: expected ndim=3, found ndim=4

Disclaimer: this page is a Chinese-English translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same CC BY-SA license and attribute it to the original authors (not me). Original StackOverflow question: http://stackoverflow.com/questions/44583254/


ValueError: Input 0 is incompatible with layer lstm_13: expected ndim=3, found ndim=4

python, keras, lstm, recurrent-neural-network

Asked by Urja Pawar

I am trying to do multi-class classification, and here are the details of my training input and output:

train_input.shape = (1, 95000, 360) (an input array of length 95000, where each element is an array of length 360)

train_output.shape = (1, 95000, 22) (there are 22 classes)

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
# The next line is the one the traceback points at: input_shape=(1, 95000, 360)
# implies 4-D input (batch, 1, 95000, 360), but an LSTM layer expects 3-D input.
model.add(LSTM(22, input_shape=(1, 95000,360)))
model.add(Dense(22, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(train_input, train_output, epochs=2, batch_size=500)

The error is:

ValueError: Input 0 is incompatible with layer lstm_13: expected ndim=3, found ndim=4 in line: model.add(LSTM(22, input_shape=(1, 95000,360)))

Please help me out; I have not been able to solve this using the other answers I found.

Accepted answer by Urja Pawar

I solved the problem by making

input size: (95000, 360, 1) and output size: (95000, 22)

and changing the input shape to (360, 1) in the code where the model is defined:

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
# Each sample is a sequence of 360 timesteps with 1 feature,
# so input_shape is (360, 1) and the batch dimension is left out.
model.add(LSTM(22, input_shape=(360, 1)))
model.add(Dense(22, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
# ml2_train_input has shape (95000, 360, 1); ml2_train_output_enc has shape (95000, 22).
model.fit(ml2_train_input, ml2_train_output_enc, epochs=2, batch_size=500)
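
For reference, a minimal sketch (not part of the original answer) of how the data from the question, shaped (1, 95000, 360) and (1, 95000, 22), can be rearranged into the (95000, 360, 1) inputs and (95000, 22) outputs used above. The variable names mirror the answer's code, and the arrays here are only placeholders:

import numpy as np

# Placeholder arrays with the shapes given in the question.
train_input = np.random.rand(1, 95000, 360)   # 95000 samples of 360 values each
train_output = np.random.rand(1, 95000, 22)   # one 22-class label vector per sample

# Drop the leading batch-of-1 axis and treat each row of 360 values
# as a sequence of 360 timesteps with a single feature.
ml2_train_input = train_input.reshape(95000, 360, 1)
ml2_train_output_enc = train_output.reshape(95000, 22)

print(ml2_train_input.shape)         # (95000, 360, 1)
print(ml2_train_output_enc.shape)    # (95000, 22)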

Answered by Michele Tonutti

input_shape is supposed to be (timesteps, n_features). Remove the first dimension.

input_shape = (95000,360)

Same for the output.

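A minimal sketch of this suggestion (illustrative only, not code from the original answer): with input_shape=(95000, 360) the LSTM expects 3-D input of shape (batch, 95000, 360), so the original train_input of shape (1, 95000, 360) is accepted as a single sample of 95000 timesteps with 360 features.

from keras.models import Sequential
from keras.layers import LSTM, Dense

# input_shape omits the batch dimension: one sample here is a sequence
# of 95000 timesteps, each carrying 360 features.
model = Sequential()
model.add(LSTM(22, input_shape=(95000, 360)))
model.add(Dense(22, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Builds without the "expected ndim=3, found ndim=4" error; the model's
# output shape is (batch, 22).
print(model.summary())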

Answered by Shobhit Srivastava

Well, I think the main problem is with the return_sequences parameter in the network. This hyperparameter should be set to False for the last LSTM layer and True for the previous LSTM layers.

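For illustration, a minimal sketch of a stacked LSTM network following that rule; the layer sizes are arbitrary and the (360, 1) input shape is borrowed from the accepted answer:

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
# Intermediate LSTM layers return the full sequence (return_sequences=True)
# so that the next LSTM layer still receives 3-D input.
model.add(LSTM(64, return_sequences=True, input_shape=(360, 1)))
model.add(LSTM(32, return_sequences=True))
# The last LSTM returns only its final state (return_sequences=False is
# the default), producing the 2-D input that the Dense layer expects.
model.add(LSTM(22, return_sequences=False))
model.add(Dense(22, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())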