Python: How do I get the weights of a layer in Keras?
Original URL: http://stackoverflow.com/questions/43715047/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverflow
How do I get the weights of a layer in Keras?
Asked by Toke Faurby
I am using Windows 10, Python 3.5, and tensorflow 1.1.0. I have the following script:
import tensorflow as tf
import tensorflow.contrib.keras.api.keras.backend as K
from tensorflow.contrib.keras.api.keras.layers import Dense
tf.reset_default_graph()
init = tf.global_variables_initializer()
sess = tf.Session()
K.set_session(sess) # Keras will use this session to initialize all variables
input_x = tf.placeholder(tf.float32, [None, 10], name='input_x')
dense1 = Dense(10, activation='relu')(input_x)
sess.run(init)
dense1.get_weights()
I get the error: AttributeError: 'Tensor' object has no attribute 'weights'
What am I doing wrong, and how do I get the weights of dense1? I have looked at this and this SO post, but I still can't make it work.
Accepted answer by Francois
If you write:
dense1 = Dense(10, activation='relu')(input_x)
Then dense1 is not a layer, it's the output of a layer. The layer is Dense(10, activation='relu').
So it seems you meant:
dense1 = Dense(10, activation='relu')
y = dense1(input_x)
Here is a full snippet:
import tensorflow as tf
from tensorflow.contrib.keras import layers
input_x = tf.placeholder(tf.float32, [None, 10], name='input_x')
dense1 = layers.Dense(10, activation='relu')
y = dense1(input_x)
weights = dense1.get_weights()
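As a side note, the structure of the returned value can be sketched without TensorFlow at all. This is a minimal pure-NumPy stand-in (ToyDense is hypothetical, not a Keras class) illustrating that get_weights() returns a plain Python list [kernel, bias], where the kernel has shape (input_dim, units) and the bias has shape (units,):

```python
import numpy as np

class ToyDense:
    """Hypothetical stand-in for a Keras Dense layer; it only
    mimics the structure that get_weights() returns."""
    def __init__(self, units, input_dim):
        # Keras stores the kernel as (input_dim, units), the bias as (units,)
        self.kernel = np.random.randn(input_dim, units).astype(np.float32)
        self.bias = np.zeros(units, dtype=np.float32)

    def get_weights(self):
        # Keras returns a list of NumPy arrays: [kernel, bias]
        return [self.kernel, self.bias]

dense1 = ToyDense(units=10, input_dim=10)
kernel, bias = dense1.get_weights()
print(kernel.shape)  # (10, 10)
print(bias.shape)    # (10,)
```

The key point is that get_weights() belongs to the layer object, not to the tensor the layer produces.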
Answered by Onno Kampman
If you want to get weights and biases of all layers, you can simply use:
for layer in model.layers:
    print(layer.get_config(), layer.get_weights())
This will print all information that's relevant.
If you want the weights directly returned as numpy arrays, you can use:
first_layer_weights = model.layers[0].get_weights()[0]
first_layer_biases = model.layers[0].get_weights()[1]
second_layer_weights = model.layers[1].get_weights()[0]
second_layer_biases = model.layers[1].get_weights()[1]
etc.
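The indexing convention can be mimicked with plain NumPy (mock_weights below is a hypothetical stand-in for model.layers, not the Keras API): index [0] into the get_weights() list is the weight matrix, and [1] is the bias vector.

```python
import numpy as np

# Hypothetical stand-in for model.layers: each entry mimics the
# [kernel, bias] list that layer.get_weights() returns.
layer_sizes = [(5, 16), (16, 12), (12, 6), (6, 1)]  # (inputs, units) per layer
mock_weights = [[np.zeros((n_in, units)), np.zeros(units)]
                for n_in, units in layer_sizes]

first_layer_weights = mock_weights[0][0]   # like model.layers[0].get_weights()[0]
first_layer_biases = mock_weights[0][1]    # like model.layers[0].get_weights()[1]
second_layer_weights = mock_weights[1][0]  # like model.layers[1].get_weights()[0]

print(first_layer_weights.shape)   # (5, 16)
print(first_layer_biases.shape)    # (16,)
print(second_layer_weights.shape)  # (16, 12)
```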
Answered by Eric M
If you want to see how the weights and biases of your layer change over time, you can add a callback to record their values at each training epoch.
Using a model like this for example,
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(16, input_shape=(train_inp_s.shape[1:])), Dense(12), Dense(6), Dense(1)])
add the callbacks **kwarg during fitting:
gw = GetWeights()
model.fit(X, y, validation_split=0.15, epochs=10, batch_size=100, callbacks=[gw])
where the callback is defined by
from keras.callbacks import Callback

class GetWeights(Callback):
    # Keras callback which collects values of weights and biases at each epoch
    def __init__(self):
        super(GetWeights, self).__init__()
        self.weight_dict = {}

    def on_epoch_end(self, epoch, logs=None):
        # this function runs at the end of each epoch
        # loop over each layer and get weights and biases
        for layer_i in range(len(self.model.layers)):
            w = self.model.layers[layer_i].get_weights()[0]
            b = self.model.layers[layer_i].get_weights()[1]
            print('Layer %s has weights of shape %s and biases of shape %s' % (
                layer_i, np.shape(w), np.shape(b)))

            # save all weights and biases inside a dictionary
            if epoch == 0:
                # create array to hold weights and biases
                self.weight_dict['w_' + str(layer_i + 1)] = w
                self.weight_dict['b_' + str(layer_i + 1)] = b
            else:
                # append new weights to previously-created weights array
                self.weight_dict['w_' + str(layer_i + 1)] = np.dstack(
                    (self.weight_dict['w_' + str(layer_i + 1)], w))
                # append new biases to previously-created biases array
                self.weight_dict['b_' + str(layer_i + 1)] = np.dstack(
                    (self.weight_dict['b_' + str(layer_i + 1)], b))
This callback will build a dictionary with all the layer weights and biases, labeled by the layer numbers, so you can see how they change over time as your model is being trained. You'll notice the shape of each weight and bias array depends on the shape of the model layer. One weights array and one bias array are saved for each layer in your model. The third axis (depth) shows their evolution over time.
Here we used 10 epochs and a model with layers of 16, 12, 6, and 1 neurons:
for key in gw.weight_dict:
    print(str(key) + ' shape: %s' % str(np.shape(gw.weight_dict[key])))
w_1 shape: (5, 16, 10)
b_1 shape: (1, 16, 10)
w_2 shape: (16, 12, 10)
b_2 shape: (1, 12, 10)
w_3 shape: (12, 6, 10)
b_3 shape: (1, 6, 10)
w_4 shape: (6, 1, 10)
b_4 shape: (1, 1, 10)
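These shapes follow directly from how np.dstack stacks arrays along a third (depth) axis; a hedged, NumPy-only sketch (using zero arrays in place of real trained weights) reproduces them:

```python
import numpy as np

# np.dstack appends along a third (depth) axis: a (5, 16) kernel
# stacked over 10 epochs becomes (5, 16, 10). A 1-D (16,) bias is
# first promoted to (1, 16, 1), so 10 epochs give (1, 16, 10).
w_hist = np.zeros((5, 16))   # epoch 0: stored as-is
b_hist = np.zeros(16)
for _ in range(9):           # epochs 1..9: dstack in the depth axis
    w_hist = np.dstack((w_hist, np.zeros((5, 16))))
    b_hist = np.dstack((b_hist, np.zeros(16)))
print(w_hist.shape)  # (5, 16, 10)
print(b_hist.shape)  # (1, 16, 10)
```

This is why the bias arrays above carry a leading axis of length 1 while the kernels do not.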