Python: how to set an RMSE cost function in TensorFlow
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must follow the same license and attribute it to the original authors (not me): StackOverflow.
Original URL: http://stackoverflow.com/questions/33846069/
How to set an RMSE cost function in TensorFlow
Asked by Vikash Singh
I have a cost function in TensorFlow:
y_model = tf.add(tf.mul(X, W), b)  # linear model output (tf.mul was renamed tf.multiply in TF >= 1.0)
cost = tf.pow(Y - y_model, 2)      # squared error for the cost function
I am trying out this example. How can I change it to an RMSE cost function?
Accepted answer by Rajarshee Mitra
tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(targets, outputs))))
And slightly simplified (TensorFlow overloads the most important operators):
tf.sqrt(tf.reduce_mean((targets - outputs)**2))
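As a sanity check (not part of the original answer), the same arithmetic in plain NumPy, with made-up `targets` and `outputs` values:

```python
import numpy as np

# toy values, purely illustrative
targets = np.array([1.0, 2.0, 3.0])
outputs = np.array([1.5, 2.0, 2.0])

# square the errors, average them, then take the square root
rmse = np.sqrt(np.mean((targets - outputs) ** 2))
print(rmse)
```
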
Answered by dga
(1) Are you sure you need this? Minimizing the L2 loss will give you the same result as minimizing the RMSE. (Walk through the math: you don't need to take the square root, because minimizing x^2 still minimizes x for x > 0, and you know that a sum of squares is non-negative. Minimizing x*n minimizes x for constant n.)
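The monotonicity argument can be checked numerically. A sketch (the data and the grid of candidate slopes are made up for illustration): fit a one-parameter model y ≈ w·x by grid search and confirm that the MSE and the RMSE pick the same w:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.1, 3.9, 6.2])          # toy data, roughly y = 2x

ws = np.linspace(0.0, 4.0, 401)        # candidate slopes
mse = np.array([np.mean((y - w * x) ** 2) for w in ws])
rmse = np.sqrt(mse)

# sqrt is monotone increasing, so both losses are minimized at the same w
assert ws[np.argmin(mse)] == ws[np.argmin(rmse)]
```
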
(2) If you need to know the numerical value of the RMSE, then implement it directly from the definition of RMSE:
tf.sqrt(tf.reduce_sum(...)/n)
(You need to know or calculate n, the number of elements in the sum, and set the reduction axis appropriately in the call to reduce_sum.)
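For a 1-D tensor of errors, n is simply the number of elements. A NumPy sketch of the same sum-then-divide computation (array values are made up), showing it matches the reduce_mean form:

```python
import numpy as np

targets = np.array([1.0, 2.0, 3.0, 4.0])
outputs = np.array([0.0, 2.0, 3.0, 6.0])

n = targets.size                        # number of elements in the sum
rmse = np.sqrt(np.sum((targets - outputs) ** 2) / n)

# dividing the sum by n is exactly the mean, so this matches reduce_mean
assert np.isclose(rmse, np.sqrt(np.mean((targets - outputs) ** 2)))
```
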
Answered by Salvador Dali
The formula for root mean square error is:

RMSE = sqrt( (1/n) * sum_i (Y1_i - Y2_i)^2 )
The way to implement it in TF is tf.sqrt(tf.reduce_mean(tf.squared_difference(Y1, Y2))).
The important thing to remember is that there is no need to minimize the RMSE loss with the optimizer. You get the same result by minimizing just tf.reduce_mean(tf.squared_difference(Y1, Y2)), or even tf.reduce_sum(tf.squared_difference(Y1, Y2)), and because these have a smaller graph of operations, they will be optimized faster.
But you can use this function if you just want to track the value of the RMSE.
Answered by pjh
Now we have tf.losses.mean_squared_error
Therefore,
RMSE = tf.sqrt(tf.losses.mean_squared_error(label, prediction))