Python: How to add and remove new layers in Keras after loading weights?
Disclaimer: This page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute it to the original authors (not me) on StackOverflow.
Original URL: http://stackoverflow.com/questions/41668813/
How to add and remove new layers in keras after loading weights?
Asked by Eka
I am trying to do transfer learning; for that purpose I want to remove the last two layers of the neural network and add another two layers. Here is example code that produces the same error.
from keras.models import Sequential
from keras.layers import Input,Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.core import Dropout, Activation
from keras.layers.pooling import GlobalAveragePooling2D
from keras.models import Model
in_img = Input(shape=(3, 32, 32))
x = Convolution2D(12, 3, 3, subsample=(2, 2), border_mode='valid', name='conv1')(in_img)
x = Activation('relu', name='relu_conv1')(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool1')(x)
x = Convolution2D(3, 1, 1, border_mode='valid', name='conv2')(x)
x = Activation('relu', name='relu_conv2')(x)
x = GlobalAveragePooling2D()(x)
o = Activation('softmax', name='loss')(x)
model = Model(input=in_img, output=[o])
model.compile(loss="categorical_crossentropy", optimizer="adam")
#model.load_weights('model_weights.h5', by_name=True)
model.summary()
model.layers.pop()
model.layers.pop()
model.summary()
model.add(MaxPooling2D())                       # <- fails here: a functional Model has no .add() method
model.add(Activation('sigmoid', name='loss'))
I removed the layers using pop(), but when I try to add new ones it outputs this error:
AttributeError: 'Model' object has no attribute 'add'
I know the most probable reason for the error is improper use of model.add(). What other syntax should I use?
EDIT:
I tried to remove/add layers in Keras, but it does not allow layers to be added after loading external weights.
from keras.models import Sequential
from keras.layers import Input,Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.core import Dropout, Activation
from keras.layers.pooling import GlobalAveragePooling2D
from keras.models import Model
in_img = Input(shape=(3, 32, 32))
def gen_model():
    in_img = Input(shape=(3, 32, 32))
    x = Convolution2D(12, 3, 3, subsample=(2, 2), border_mode='valid', name='conv1')(in_img)
    x = Activation('relu', name='relu_conv1')(x)
    x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool1')(x)
    x = Convolution2D(3, 1, 1, border_mode='valid', name='conv2')(x)
    x = Activation('relu', name='relu_conv2')(x)
    x = GlobalAveragePooling2D()(x)
    o = Activation('softmax', name='loss')(x)
    model = Model(input=in_img, output=[o])
    return model
#parent model
model=gen_model()
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.summary()
#saving model weights
model.save('model_weights.h5')
#loading weights to second model
model2=gen_model()
model2.compile(loss="categorical_crossentropy", optimizer="adam")
model2.load_weights('model_weights.h5', by_name=True)
model2.layers.pop()
model2.layers.pop()
model2.summary()
#editing layers in the second model and saving as third model
x = MaxPooling2D()(model2.layers[-1].output)
o = Activation('sigmoid', name='loss')(x)
model3 = Model(input=in_img, output=[o])   # <- raises "Graph disconnected": this is the global in_img, which is not part of model2's graph
It shows this error:
RuntimeError: Graph disconnected: cannot obtain value for tensor input_4 at layer "input_4". The following previous layers were accessed without issue: []
Answered by indraforyou
You can take the output of the last model and create a new model. The lower layers remain the same.
model.summary()
model.layers.pop()
model.layers.pop()
model.summary()
x = MaxPooling2D()(model.layers[-1].output)    # model.layers[-1] is now 'relu_conv2'
o = Activation('sigmoid', name='loss')(x)
model2 = Model(input=in_img, output=[o])
model2.summary()
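Note that the MaxPooling2D/sigmoid head above has no trainable weights of its own, so in a real transfer-learning setup you would usually attach a small trainable head and freeze the reused layers instead. A minimal sketch under that assumption (the Dense sizes, layer names and loss below are illustrative, not part of the original answer):
from keras.layers import Dense, Flatten

# freeze the reused layers so only the new head is trained
for layer in model.layers:
    layer.trainable = False

x = Flatten()(model.layers[-1].output)               # 'relu_conv2' output after the two pops
x = Dense(64, activation='relu', name='new_fc')(x)
o = Dense(10, activation='softmax', name='new_out')(x)
new_model = Model(input=model.input, output=[o])
new_model.compile(loss='categorical_crossentropy', optimizer='adam')
new_model.summary()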
Check How to use models from keras.applications for transfer learning?
Update on Edit:
The new error is because you are trying to create the new model on the global in_img, which is actually not used in the previous model creation: inside gen_model() you are defining a local in_img. So the global in_img is obviously not connected to the upper layers in the symbolic graph, and it has nothing to do with loading weights.
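As a minimal illustration of the same failure mode (hypothetical tensors, not from the post): building a Model from an Input that was never used in the graph raises exactly this error.
in_a = Input(shape=(3, 32, 32))     # this Input feeds the graph
in_b = Input(shape=(3, 32, 32))     # this Input is never used
y = Activation('relu')(in_a)
# Model(input=in_b, output=[y])     # -> RuntimeError: Graph disconnected (in_b is not on the path to y)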
To resolve this properly you should instead use model.input to reference the input:
model3 = Model(input=model2.input, output=[o])
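Putting that fix into the code from the edit, the third model would be built roughly like this (same layers as before; only the input reference changes):
x = MaxPooling2D()(model2.layers[-1].output)
o = Activation('sigmoid', name='loss')(x)
model3 = Model(input=model2.input, output=[o])
model3.summary()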
Answered by Wesam Na
Another way to do it:
from keras.models import Model
layer_name = 'relu_conv2'
model2= Model(inputs=model1.input, outputs=model1.get_layer(layer_name).output)
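The truncated model can then be extended with new layers through the functional API. A small sketch, where the added Flatten/Dense head and its names are illustrative assumptions:
from keras.layers import Dense, Flatten
from keras.models import Model

x = Flatten()(model2.output)                          # output of 'relu_conv2'
o = Dense(10, activation='softmax', name='new_head')(x)
model3 = Model(inputs=model2.input, outputs=o)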
Answered by Yamaneko
As of Keras 2.3.1 and TensorFlow 2.0, model.layers.pop() is not working as intended (see the issue here). They suggested two options to do this.
One option is to recreate the model and copy the layers. For instance, if you want to remove the last layer and add another one, you can do:
model = Sequential()
for layer in source_model.layers[:-1]:  # go through until last layer
    model.add(layer)
model.add(Dense(3, activation='softmax'))
model.summary()
model.compile(optimizer='adam', loss='categorical_crossentropy')
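A minimal variant of the same idea for the original question (drop the last two layers instead of one, and optionally freeze the reused layers); the replacement Dense layers below are placeholders:
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
for layer in source_model.layers[:-2]:   # drop the last two layers
    layer.trainable = False              # optional: keep the pre-trained weights fixed
    model.add(layer)
model.add(Dense(64, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy')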
Another option is to use the functional model:
predictions = Dense(3, activation='softmax')(source_model.layers[-2].output)
model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy')
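Note that inputs in the snippet above is not defined in the answer itself; it presumably refers to the existing model's input tensor, e.g.:
from keras.models import Model
from keras.layers import Dense

predictions = Dense(3, activation='softmax')(source_model.layers[-2].output)
model = Model(inputs=source_model.input, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy')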
model.layers[-1].output means the last layer's output, which is the final output, so in your code you actually didn't remove any layers; you added another head/path.
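A quick way to verify this is that the model's output tensor does not change when layers are popped from model.layers; to actually shorten the network you have to rebuild a Model from an earlier layer's output. A small sketch using the layer names from the question's model:
print(model.output)                # the softmax 'loss' tensor
model.layers.pop()
model.layers.pop()
print(model.output)                # unchanged: popping the list does not edit the graph

# actually truncate by rebuilding from the desired layer's output
from keras.models import Model
truncated = Model(inputs=model.input, outputs=model.get_layer('relu_conv2').output)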