Python: What is the problem with my implementation of the cross-entropy function?
Disclaimer: this page is a Chinese-English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use or share it, but you must do so under the same license, cite the original URL, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/47377222/
What is the problem with my implementation of the cross-entropy function?
Asked by Jassy.W
I am learning about neural networks and I want to write a function cross_entropy in Python, where the cross-entropy is defined as

CE = -(1/N) * sum_{i=1}^{N} sum_{j=1}^{k} t_{i,j} * log(p_{i,j})

where N is the number of samples, k is the number of classes, log is the natural logarithm, t_{i,j} is 1 if sample i is in class j and 0 otherwise, and p_{i,j} is the predicted probability that sample i is in class j. To avoid numerical issues with the logarithm, clip the predictions to the [10^{-12}, 1 - 10^{-12}] range.
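To make the indexing in the formula concrete, here is a minimal unvectorized sketch of the same computation (the explicit double loop and the name cross_entropy_loops are illustrative additions, not part of the original question):

import numpy as np

def cross_entropy_loops(predictions, targets, epsilon=1e-12):
    """Naive double-loop version of the formula above (for illustration only)."""
    predictions = np.clip(predictions, epsilon, 1. - epsilon)
    N, k = predictions.shape
    total = 0.0
    for i in range(N):        # loop over samples
        for j in range(k):    # loop over classes
            total += targets[i, j] * np.log(predictions[i, j])
    return -total / N         # average over the N samples, not over all N*k entries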
According to the above description, I wrote the following code, clipping the predictions to the [epsilon, 1 - epsilon] range and then computing the cross-entropy from the formula above.
import numpy as np

def cross_entropy(predictions, targets, epsilon=1e-12):
    """
    Computes cross entropy between targets (encoded as one-hot vectors)
    and predictions.
    Input: predictions (N, k) ndarray
           targets (N, k) ndarray
    Returns: scalar
    """
    predictions = np.clip(predictions, epsilon, 1. - epsilon)
    ce = - np.mean(np.log(predictions) * targets)
    return ce
The following code will be used to check whether the function cross_entropy is correct.
predictions = np.array([[0.25, 0.25, 0.25, 0.25],
                        [0.01, 0.01, 0.01, 0.96]])
targets = np.array([[0, 0, 0, 1],
                    [0, 0, 0, 1]])
ans = 0.71355817782  # Correct answer
x = cross_entropy(predictions, targets)
print(np.isclose(x, ans))
The output of the above code is False, which means my definition of the function cross_entropy is not correct. I then printed the result of cross_entropy(predictions, targets): it gave 0.178389544455, while the correct result should be ans = 0.71355817782. Could anybody help me find the problem with my code?
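For reference, the expected value can be checked by hand (this short derivation is not part of the original post): only the entries with t_{i,j} = 1 contribute, so

CE = -(1/2) * (log(0.25) + log(0.96)) ≈ (1.3862944 + 0.0408220) / 2 ≈ 0.7135582,

which matches ans = 0.71355817782.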
Answered by Dascienz
You're not far off at all, but remember that you are taking the average of N per-sample sums, where N = 2 in this case. np.mean divides by the total number of array entries, N*k = 8, rather than by N, which is why your result (0.178389544455) is exactly a factor of k = 4 smaller than the expected 0.71355817782. So your code could read:
def cross_entropy(predictions, targets, epsilon=1e-12):
    """
    Computes cross entropy between targets (encoded as one-hot vectors)
    and predictions.
    Input: predictions (N, k) ndarray
           targets (N, k) ndarray
    Returns: scalar
    """
    predictions = np.clip(predictions, epsilon, 1. - epsilon)
    N = predictions.shape[0]
    ce = -np.sum(targets * np.log(predictions + 1e-9)) / N
    return ce
predictions = np.array([[0.25, 0.25, 0.25, 0.25],
                        [0.01, 0.01, 0.01, 0.96]])
targets = np.array([[0, 0, 0, 1],
                    [0, 0, 0, 1]])
ans = 0.71355817782  # Correct answer
x = cross_entropy(predictions, targets)
print(np.isclose(x, ans))
Here, I think it's a little clearer if you stick with np.sum(). Also, I added 1e-9 inside the np.log() to avoid the possibility of having a log(0) in your computation. Hope this helps!
NOTE: As per @Peter's comment, the offset of 1e-9 is indeed redundant if your epsilon value is greater than 0.
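An equivalent way to express the corrected computation (a sketch of my own, not taken from either answer) makes the "sum over classes, then average over samples" structure explicit:

import numpy as np

def cross_entropy_rowwise(predictions, targets, epsilon=1e-12):
    """Sum the per-sample class terms first, then average over the N samples."""
    predictions = np.clip(predictions, epsilon, 1. - epsilon)
    per_sample = -np.sum(targets * np.log(predictions), axis=1)  # shape (N,)
    return np.mean(per_sample)

With the predictions and targets from the question, this also returns approximately 0.71355817782.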
Answered by Peter
import numpy as np
from scipy.stats import entropy, truncnorm

def cross_entropy(x, y):
    """ Computes cross entropy between two distributions.
    Input: x: iterable of N non-negative values
           y: iterable of N non-negative values
    Returns: scalar
    """
    if np.any(x < 0) or np.any(y < 0):
        raise ValueError('Negative values exist.')

    # Force to proper probability mass function.
    x = np.array(x, dtype=float)
    y = np.array(y, dtype=float)
    x /= np.sum(x)
    y /= np.sum(y)

    # Ignore zero 'y' elements.
    mask = y > 0
    x = x[mask]
    y = y[mask]
    ce = -np.sum(x * np.log(y))
    return ce

def cross_entropy_via_scipy(x, y):
    ''' SEE: https://en.wikipedia.org/wiki/Cross_entropy '''
    return entropy(x) + entropy(x, y)

x = truncnorm.rvs(0.1, 2, size=100)
y = truncnorm.rvs(0.1, 2, size=100)
print(np.isclose(cross_entropy(x, y), cross_entropy_via_scipy(x, y)))
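As a quick sanity check of the identity used above, H(x, y) = H(x) + D_KL(x || y), here is a small sketch (the two distributions are chosen arbitrarily and are not from the original answer):

import numpy as np
from scipy.stats import entropy

# Two small, already-normalized distributions (chosen arbitrarily).
x = np.array([0.2, 0.3, 0.5])
y = np.array([0.1, 0.4, 0.5])

direct = -np.sum(x * np.log(y))            # cross-entropy computed directly
via_identity = entropy(x) + entropy(x, y)  # H(x) + KL(x || y); natural log by default
print(np.isclose(direct, via_identity))    # expected: True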