Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must likewise follow CC BY-SA and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/36706379/

How to exactly add L1 regularisation to tensorflow error function

python, neural-network, tensorflow, deep-learning

Asked by Abhishek

Hey, I am new to TensorFlow and even after a lot of effort I could not add an L1 regularisation term to the error term:

x = tf.placeholder("float", [None, n_input])
# Weights and biases to hidden layer
ae_Wh1 = tf.Variable(tf.random_uniform((n_input, n_hidden1), -1.0 / math.sqrt(n_input), 1.0 / math.sqrt(n_input)))
ae_bh1 = tf.Variable(tf.zeros([n_hidden1]))
ae_h1 = tf.nn.tanh(tf.matmul(x,ae_Wh1) + ae_bh1)

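# Weights and biases to second hidden layer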
ae_Wh2 = tf.Variable(tf.random_uniform((n_hidden1, n_hidden2), -1.0 / math.sqrt(n_hidden1), 1.0 / math.sqrt(n_hidden1)))
ae_bh2 = tf.Variable(tf.zeros([n_hidden2]))
ae_h2 = tf.nn.tanh(tf.matmul(ae_h1,ae_Wh2) + ae_bh2)

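# Decoder: reuse the transposed encoder weights (tied weights)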
ae_Wh3 = tf.transpose(ae_Wh2)
ae_bh3 = tf.Variable(tf.zeros([n_hidden1]))
ae_h1_O = tf.nn.tanh(tf.matmul(ae_h2,ae_Wh3) + ae_bh3)

ae_Wh4 = tf.transpose(ae_Wh1)
ae_bh4 = tf.Variable(tf.zeros([n_input]))
ae_y_pred = tf.nn.tanh(tf.matmul(ae_h1_O,ae_Wh4) + ae_bh4)



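# Target placeholder, mean-squared reconstruction loss and training step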
ae_y_actual = tf.placeholder("float", [None,n_input])
meansq = tf.reduce_mean(tf.square(ae_y_actual - ae_y_pred))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(meansq)

After this, I run the above graph using:

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

n_rounds = 100
batch_size = min(500, n_samp)
for i in range(n_rounds):
    sample = np.random.randint(n_samp, size=batch_size)
    batch_xs = input_data[sample][:]
    batch_ys = output_data_ae[sample][:]
    sess.run(train_step, feed_dict={x: batch_xs, ae_y_actual:batch_ys})

Above is the code for a 4-layer autoencoder; "meansq" is my squared loss function. How can I add L1 regularisation for the weight matrices (tensors) in the network?

Answer by bruThaler

You can use TensorFlow's apply_regularization and l1_regularizer methods. Note: this is for TensorFlow 1; the API changed in TensorFlow 2, see the edit below.

An example based on your question:

import tensorflow as tf

total_loss = meansq # or other loss calculation
l1_regularizer = tf.contrib.layers.l1_regularizer(
    scale=0.005, scope=None
)
weights = tf.trainable_variables() # all vars of your graph
regularization_penalty = tf.contrib.layers.apply_regularization(l1_regularizer, weights)

regularized_loss = total_loss + regularization_penalty # this loss needs to be minimized
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(regularized_loss)

Note: weights is a list where each entry is a tf.Variable.


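If you want to penalise only the weight matrices and not the biases, you can pass an explicit list instead of tf.trainable_variables(). A minimal sketch using the variables from the question (the 0.005 scale is just an illustrative choice; the decoder weights are transposes of the encoder weights, so listing the encoder pair covers them):

weights = [ae_Wh1, ae_Wh2] # encoder weight matrices only, no biases
regularization_penalty = tf.contrib.layers.apply_regularization(l1_regularizer, weights)

# Equivalently, without the contrib helper:
l1_penalty = 0.005 * sum(tf.reduce_sum(tf.abs(w)) for w in weights)
regularized_loss = total_loss + l1_penalty
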
Edited: as Paddy correctly noted, the regularizer API changed in TensorFlow 2, where L1 regularization is provided by tf.keras.regularizers.


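For reference, a minimal TensorFlow 2 / Keras sketch (the layer sizes and the 0.005 scale are illustrative assumptions, not taken from the question):

import tensorflow as tf

# kernel_regularizer attaches an L1 penalty to the layer's weight
# matrix; Keras adds the penalty to the training loss automatically.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="tanh",
                          kernel_regularizer=tf.keras.regularizers.l1(0.005)),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer="sgd", loss="mse")
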
Answer by Philly

You can also use l1_regularizer() from the slim losses (tf.contrib.slim).

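A rough sketch of how this is typically wired up with slim in TensorFlow 1 (x, n_hidden1 and meansq come from the question; the layer and the 0.005 scale are illustrative):

slim = tf.contrib.slim

# weights_regularizer attaches the L1 penalty to the layer's weights;
# slim records the penalties in the regularization-loss collection.
net = slim.fully_connected(x, n_hidden1,
                           activation_fn=tf.nn.tanh,
                           weights_regularizer=slim.l1_regularizer(0.005))

reg_loss = tf.add_n(tf.losses.get_regularization_losses())
loss_with_l1 = meansq + reg_loss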