Python 如何计算两个张量之间的余弦相似度?

Warning: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me). Original StackOverflow question: http://stackoverflow.com/questions/43357732/

Date: 2020-08-19 22:57:17  Source: igfitidea

How to calculate the Cosine similarity between two tensors?

python, tensorflow, neural-network

Asked by Matias

I have two normalized tensors and I need to calculate the cosine similarity between these tensors. How do I do it with TensorFlow?


cosine(normalize_a,normalize_b)

    a = tf.placeholder(tf.float32, shape=[None], name="input_placeholder_a")
    b = tf.placeholder(tf.float32, shape=[None], name="input_placeholder_b")
    normalize_a = tf.nn.l2_normalize(a, 0)
    normalize_b = tf.nn.l2_normalize(b, 0)

Answered by Miriam Farber

This will do the job:


a = tf.placeholder(tf.float32, shape=[None], name="input_placeholder_a")
b = tf.placeholder(tf.float32, shape=[None], name="input_placeholder_b")
normalize_a = tf.nn.l2_normalize(a, 0)
normalize_b = tf.nn.l2_normalize(b, 0)
# dot product of two unit vectors = cosine similarity
cos_similarity = tf.reduce_sum(tf.multiply(normalize_a, normalize_b))
sess = tf.Session()
cos_sim = sess.run(cos_similarity, feed_dict={a: [1, 2, 3], b: [2, 4, 6]})
print(cos_sim)

This prints 0.99999988

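As a sanity check outside TensorFlow, the same computation can be written in plain Python (a sketch; [1, 2, 3] and [2, 4, 6] are parallel vectors, so the exact answer is 1):

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1, 2, 3], [2, 4, 6]))  # ~1.0, up to float rounding
```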

Answered by Rajarshee Mitra

Times change. With the latest TF API, this can be computed by calling tf.losses.cosine_distance.


Example:


import tensorflow as tf
import numpy as np


x = tf.constant(np.random.uniform(-1, 1, 10))
y = tf.constant(np.random.uniform(-1, 1, 10))
# the `dim` argument was renamed `axis` in later TF 1.x releases
s = tf.losses.cosine_distance(tf.nn.l2_normalize(x, 0), tf.nn.l2_normalize(y, 0), axis=0)
print(tf.Session().run(s))

Of course, 1 - s is the cosine similarity!

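Since tf.losses.cosine_distance returns a distance, the similarity is recovered as 1 - s. The relationship can be sketched in plain Python (the helper names here are illustrative, not TF API):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def cosine_distance(a, b):
    # distance = 1 - similarity, as tf.losses.cosine_distance computes
    # for unit-length inputs
    return 1.0 - cosine_similarity(a, b)

a, b = [1.0, 2.0, 3.0], [3.0, 2.0, 1.0]
s = cosine_distance(a, b)
print(1.0 - s)  # recovers cosine_similarity(a, b)
```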

Answered by Andrew LD

You can normalize your vector or matrix like this:


# states: [batch_size, hidden_num]
states_norm = tf.nn.l2_normalize(states, dim=1)
# embedding: [batch_size, embedding_dims]
embedding_norm = tf.nn.l2_normalize(embedding, dim=1)
# assert hidden_num == embedding_dims
# after matmul: [batch_size, batch_size] pairwise cosine similarities
user_app_scores = tf.matmul(states_norm, embedding_norm, transpose_b=True)
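The same row-normalize-then-matmul pattern can be sketched in NumPy (assumed shapes: 4 "state" rows and 5 "embedding" rows sharing one feature dimension, chosen here only to make the pairwise structure visible):

```python
import numpy as np

rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, (4, 8))     # [num_states, feature_dim]
embedding = rng.uniform(-1, 1, (5, 8))  # [num_items, feature_dim]

# L2-normalize each row, as tf.nn.l2_normalize(..., dim=1) does
states_norm = states / np.linalg.norm(states, axis=1, keepdims=True)
embedding_norm = embedding / np.linalg.norm(embedding, axis=1, keepdims=True)

# matrix of all pairwise cosine similarities
scores = states_norm @ embedding_norm.T
print(scores.shape)  # (4, 5)
```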