Topic distribution: How do we see which document belongs to which topic after doing LDA in Python?
Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must follow the same license and attribute it to the original authors (not me): StackOverflow.
Original URL: http://stackoverflow.com/questions/20984841/
Asked by jxn
I am able to run the LDA code from gensim and got the top 10 topics with their respective keywords.
Now I would like to go a step further and see how accurate the LDA algorithm is by checking which documents it clusters into each topic. Is this possible in gensim LDA?
Basically I would like to do something like this, but in Python and using gensim:
LDA with topicmodels, how can I see which topics different documents belong to?
Accepted answer by alvas
Using the probabilities of the topics, you can try to set some threshold and use it as a clustering baseline, but I am sure there are better ways to do clustering than this 'hacky' method.
from gensim import corpora, models, similarities
from itertools import chain

""" DEMO """
documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering",
             "Graph minors A survey"]

# Remove common words and tokenize.
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in documents]

# Remove words that appear only once.
all_tokens = sum(texts, [])
tokens_once = set(word for word in set(all_tokens) if all_tokens.count(word) == 1)
texts = [[word for word in text if word not in tokens_once] for text in texts]

# Create the Dictionary.
id2word = corpora.Dictionary(texts)

# Create the bag-of-words corpus.
mm = [id2word.doc2bow(text) for text in texts]

# Train the LDA model.
lda = models.ldamodel.LdaModel(corpus=mm, id2word=id2word, num_topics=3,
                               update_every=1, chunksize=10000, passes=1)

# Print the topics.
for top in lda.print_topics():
    print(top)
print()

# Assign the topics to the documents in the corpus.
lda_corpus = lda[mm]

# Find the threshold; let's set the threshold to be 1/#clusters.
# To prove that the threshold is sane, we average the sum of all probabilities:
scores = list(chain(*[[score for topic_id, score in topic]
                      for topic in [doc for doc in lda_corpus]]))
threshold = sum(scores) / len(scores)
print(threshold)
print()

cluster1 = [j for i, j in zip(lda_corpus, documents) if i[0][1] > threshold]
cluster2 = [j for i, j in zip(lda_corpus, documents) if i[1][1] > threshold]
cluster3 = [j for i, j in zip(lda_corpus, documents) if i[2][1] > threshold]
print(cluster1)
print(cluster2)
print(cluster3)
[out]:
0.131*trees + 0.121*graph + 0.119*system + 0.115*user + 0.098*survey + 0.082*interface + 0.080*eps + 0.064*minors + 0.056*response + 0.056*computer
0.171*time + 0.171*user + 0.170*response + 0.082*survey + 0.080*computer + 0.079*system + 0.050*trees + 0.042*graph + 0.040*minors + 0.040*human
0.155*system + 0.150*human + 0.110*graph + 0.107*minors + 0.094*trees + 0.090*eps + 0.088*computer + 0.087*interface + 0.040*survey + 0.028*user
0.333333333333
['The EPS user interface management system', 'The generation of random binary unordered trees', 'The intersection graph of paths in trees', 'Graph minors A survey']
['A survey of user opinion of computer system response time', 'Relation of user perceived response time to error measurement']
['Human machine interface for lab abc computer applications', 'System and human system engineering testing of EPS', 'Graph minors IV Widths of trees and well quasi ordering']
Just to make it clearer:
# Find the threshold; let's set the threshold to be 1/#clusters.
# To prove that the threshold is sane, we average the sum of all probabilities:
scores = []
for doc in lda_corpus:
    for topic_id, score in doc:
        scores.append(score)
threshold = sum(scores) / len(scores)
The code above sums the scores of all topics across all documents, then normalizes that sum by the number of scores, i.e. it averages all topic probabilities in the corpus.
Answered by nos
If you want to use the trick of
cluster1 = [j for i, j in zip(lda_corpus, documents) if i[0][1] > threshold]
cluster2 = [j for i, j in zip(lda_corpus, documents) if i[1][1] > threshold]
cluster3 = [j for i, j in zip(lda_corpus, documents) if i[2][1] > threshold]
in the previous answer by alvas, make sure to set minimum_probability=0 in LdaModel
gensim.models.ldamodel.LdaModel(corpus,
                                num_topics=num_topics, id2word=dictionary,
                                passes=2, minimum_probability=0)
Otherwise the dimensions of lda_corpus and documents may not agree, since gensim will suppress any topic with probability lower than minimum_probability.
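As a related option (not from the original answers), gensim's LdaModel also exposes get_document_topics, which takes minimum_probability per call, so you can keep the suppression in the trained model and still retrieve full distributions. A minimal sketch, assuming the lda model and mm corpus from the accepted answer:

# Query each document's full topic distribution without suppressing
# low-probability topics (assumes `lda` and `mm` from the accepted answer).
for doc_id, bow in enumerate(mm):
    doc_topics = lda.get_document_topics(bow, minimum_probability=0.0)
    print(doc_id, doc_topics)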
An alternative way to group documents into topics is to assign each document the topic with the maximum probability:
lda_corpus = [max(prob, key=lambda y: y[1])
              for prob in lda[mm]]
playlists = [[] for i in range(num_topics)]
for i, x in enumerate(lda_corpus):
    playlists[x[0]].append(documents[i])
Note that lda[mm] is, roughly speaking, a list of lists, or a 2D matrix. The number of rows is the number of documents and the number of columns is the number of topics. Each matrix element is a tuple of the form (3, 0.82), for example, where 3 refers to the topic index and 0.82 to the corresponding probability of that topic. By default, minimum_probability=0.01 and any tuple with a probability less than 0.01 is omitted from lda[mm]. You can set it to 1/#topics if you use the grouping method with maximum probability.
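To illustrate, here is a minimal sketch of that thresholded grouping with minimum_probability=0 and the threshold fixed at 1/#topics, reusing lda, mm, and documents from the accepted answer (the variable names clusters and members are my own):

num_topics = 3
threshold = 1.0 / num_topics  # 1/#topics, as suggested above
clusters = [[] for _ in range(num_topics)]
# With minimum_probability=0 every document yields a full-length topic
# list, so indexing by topic_id never skips a (topic_id, prob) tuple.
for doc_topics, text in zip(lda[mm], documents):
    for topic_id, prob in doc_topics:
        if prob > threshold:
            clusters[topic_id].append(text)
for topic_id, members in enumerate(clusters):
    print(topic_id, members)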
Answered by ayushi
Each lda_corpus[i] is a list of tuples of the form [(0, t1), (1, t2), ..., (9, t10)], where the first term denotes the topic index and the second term denotes the probability of that topic in that particular document.
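To make the indices concrete, a minimal sketch that flattens this structure into explicit (document_index, topic_id, probability) rows, assuming the lda model and mm corpus from the accepted answer:

# Print one row per (document, topic) pair so that both indices
# are explicit (assumes `lda` and `mm` from the accepted answer).
for doc_index, doc_topics in enumerate(lda[mm]):
    for topic_id, prob in doc_topics:
        print(doc_index, topic_id, prob)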

