Python sklearn: TFIDF Transformer: How to get the tf-idf values of given words in a document

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must follow the same license and attribute it to the original authors (not me): StackOverflow, original at http://stackoverflow.com/questions/34449127/

sklearn : TFIDF Transformer : How to get tf-idf values of given words in document

python, scikit-learn

Asked by maximus

I used sklearn to compute TF-IDF (term frequency-inverse document frequency) values for documents, using the following commands:

from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(documents)
from sklearn.feature_extraction.text import TfidfTransformer
# note: use_idf=False disables the idf weighting, so this computes (normalized) term frequencies only
tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)
X_train_tf = tf_transformer.transform(X_train_counts)

X_train_tf is a scipy.sparse matrix of shape (2257, 35788).

How can I get the TF-IDF values for the words in a particular document? More specifically, how can I get the words with the maximum TF-IDF values in a given document?

Accepted answer by sud_

You can use TfidfVectorizer from sklearn:

from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np
from scipy.sparse import csr_matrix  # needed if you want to save tfidf_matrix; the old scipy.sparse.csr submodule path was removed in recent SciPy

tf = TfidfVectorizer(input='filename', analyzer='word', ngram_range=(1, 6),
                     min_df=1, stop_words='english', sublinear_tf=True)  # min_df=0 is rejected by recent scikit-learn; min_df=1 keeps every extracted term
tfidf_matrix = tf.fit_transform(corpus)
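
Note that input='filename' tells the vectorizer that corpus is a list of file paths to read, not raw strings. A minimal sketch of such an input (the file names here are hypothetical):

corpus = ['docs/article1.txt', 'docs/article2.txt', 'docs/article3.txt']  # hypothetical paths to plain-text files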

The tfidf_matrix above holds the TF-IDF values of all the documents in the corpus. It is a big sparse matrix. Now,

feature_names = tf.get_feature_names_out()  # get_feature_names() was removed in scikit-learn 1.2

This gives you the list of all the tokens (words or n-grams). For the first document in your corpus:

doc = 0
feature_index = tfidf_matrix[doc, :].nonzero()[1]  # column indices of the non-zero entries in this row
tfidf_scores = zip(feature_index, [tfidf_matrix[doc, x] for x in feature_index])

Let's print them:

for w, s in [(feature_names[i], s) for (i, s) in tfidf_scores]:
    print(w, s)
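
To directly answer the original question (the words with the maximum TF-IDF values), you can sort these scores. A minimal sketch reusing the variables above (in Python 3, tfidf_scores is an iterator and is consumed by the loop above, so the pairs are rebuilt here):

scores = [(feature_names[i], tfidf_matrix[doc, i]) for i in feature_index]
top_words = sorted(scores, key=lambda pair: pair[1], reverse=True)[:5]  # top 5 is an arbitrary cut-off
print(top_words)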

Answered by 8cold8hot

Here is another, simpler solution in Python 3 using the pandas library:

from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd

vect = TfidfVectorizer()
tfidf_matrix = vect.fit_transform(documents)
df = pd.DataFrame(tfidf_matrix.toarray(), columns=vect.get_feature_names_out())  # get_feature_names() was removed in scikit-learn 1.2
print(df)
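
From this DataFrame, the highest-scoring words of one document can be read off directly; for example, for the first document (top 5 is an arbitrary choice):

print(df.iloc[0].nlargest(5))  # the 5 terms with the highest tf-idf in document 0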

Answered by anshu kumar

Finding the TF-IDF score of each word in a sentence can help with downstream tasks like search and semantic matching.

We can get a dictionary with each word as key and its TF-IDF score as value.

from sklearn.feature_extraction.text import TfidfVectorizer

tfidf = TfidfVectorizer(min_df=3)
tfidf.fit(list(subject_sentences.values()))  # subject_sentences: the answerer's own dict of raw text documents
feature_names = tfidf.get_feature_names_out()  # get_feature_names() was removed in scikit-learn 1.2

Now we can write the transformation logic like this:

def get_tfidf_for_words(text):
    tfidf_matrix = tfidf.transform([text]).todense()
    feature_index = tfidf_matrix[0, :].nonzero()[1]  # indices of the non-zero terms
    tfidf_scores = zip([feature_names[i] for i in feature_index],
                       [tfidf_matrix[0, x] for x in feature_index])
    return dict(tfidf_scores)

For example, for the input

text = "increase post character limit"
get_ifidf_for_words(text)

the output would be:

{
    'character': 0.5478868741621505,
    'increase': 0.5487092618866405,
    'limit': 0.5329156819959756,
    'post': 0.33873144956352985
}
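
Since the function returns a plain dict, ranking the words by score is then straightforward; a small usage sketch:

scores = get_tfidf_for_words(text)
for word, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(word, score)  # highest tf-idf first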