Python: how to use word_tokenize in a data frame

Disclaimer: this page is a translation of a popular Stack Overflow question and its answers, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must attribute it to the original authors (not this site). Original question: http://stackoverflow.com/questions/33098040/

how to use word_tokenize in data frame

python, pandas, nltk

Asked by eclairs

I have recently started using the nltk module for text analysis. I am stuck at a point. I want to use word_tokenize on a dataframe, so as to obtain all the words used in a particular row of the dataframe.

data example:
       text
1.   This is a very good site. I will recommend it to others.
2.   Can you please give me a call at 9983938428. have issues with the listings.
3.   good work! keep it up
4.   not a very helpful site in finding home decor. 

expected output:

1.   'This','is','a','very','good','site','.','I','will','recommend','it','to','others','.'
2.   'Can','you','please','give','me','a','call','at','9983938428','.','have','issues','with','the','listings'
3.   'good','work','!','keep','it','up'
4.   'not','a','very','helpful','site','in','finding','home','decor'

Basically, I want to separate all the words and find the length of each text in the dataframe.

I know word_tokenize works for a single string, but how do I apply it to the entire dataframe?

Please help!

Thanks in advance...

Accepted answer by Gregg

You can use the apply method of the DataFrame API:

import pandas as pd
import nltk

df = pd.DataFrame({'sentences': ['This is a very good site. I will recommend it to others.', 'Can you please give me a call at 9983938428. have issues with the listings.', 'good work! keep it up']})
# Tokenize each row's text; axis=1 passes one row at a time to the lambda.
df['tokenized_sents'] = df.apply(lambda row: nltk.word_tokenize(row['sentences']), axis=1)

Output:

>>> df
                                           sentences  \
0  This is a very good site. I will recommend it ...   
1  Can you please give me a call at 9983938428. h...   
2                              good work! keep it up   

                                     tokenized_sents  
0  [This, is, a, very, good, site, ., I, will, re...  
1  [Can, you, please, give, me, a, call, at, 9983...  
2                      [good, work, !, keep, it, up]
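
One caveat not mentioned in the answer: nltk.word_tokenize relies on NLTK's Punkt tokenizer models, so a one-time download along these lines is typically needed first (the exact resource name can vary between NLTK releases):

import nltk
nltk.download('punkt')  # one-time download of the tokenizer data used by word_tokenize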

To find the length of each text, try using apply with a lambda function again:

df['sents_length'] = df.apply(lambda row: len(row['tokenized_sents']), axis=1)

>>> df
                                           sentences  \
0  This is a very good site. I will recommend it ...   
1  Can you please give me a call at 9983938428. h...   
2                              good work! keep it up   

                                     tokenized_sents  sents_length  
0  [This, is, a, very, good, site, ., I, will, re...            14  
1  [Can, you, please, give, me, a, call, at, 9983...            15  
2                      [good, work, !, keep, it, up]             6  
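
As an aside (not part of the original answer), the tokenized column already exists at this point, so the same length column can also be built from that Series alone; a minimal equivalent sketch using the df constructed above:

# Equivalent to the row-wise lambda: map len over the tokenized column.
df['sents_length'] = df['tokenized_sents'].apply(len)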

Answered by Harsha Manjunath

pandas.Series.apply is faster than pandas.DataFrame.apply.

import time

import pandas as pd
import nltk

df = pd.read_csv("/path/to/file.csv")

# Series.apply: tokenize the 'verbatim' column directly.
start = time.time()
df["unigrams"] = df["verbatim"].apply(nltk.word_tokenize)
print("series.apply", time.time() - start)

# DataFrame.apply: tokenize via a row-wise lambda.
start = time.time()
df["unigrams2"] = df.apply(lambda row: nltk.word_tokenize(row["verbatim"]), axis=1)
print("dataframe.apply", time.time() - start)

On a sample 125 MB CSV file:

series.apply 144.428858995

dataframe.apply 201.884778976

Edit: You might be thinking that the DataFrame df is larger in size after series.apply(nltk.word_tokenize), which could affect the runtime of the next operation, dataframe.apply(nltk.word_tokenize).

Pandas optimizes under the hood for such a scenario. I got a similar runtime of 200s by performing dataframe.apply(nltk.word_tokenize) on its own.

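A minimal sketch of that "on its own" measurement, reusing the placeholder CSV path and verbatim column from above: load a fresh frame so no tokenized column is present yet, then time only the DataFrame.apply call.

import time

import pandas as pd
import nltk

# Fresh frame with no extra token column, so its size matches the Series.apply run above.
df2 = pd.read_csv("/path/to/file.csv")

start = time.time()
df2["unigrams2"] = df2.apply(lambda row: nltk.word_tokenize(row["verbatim"]), axis=1)
print("dataframe.apply alone", time.time() - start)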

Answered by Bryce Chamberlain

You may need to add str() to convert pandas' object dtype to a string.

Keep in mind that a faster way to count words is often just to count spaces.

Interestingly, the tokenizer counts periods as tokens. You may want to remove those first, and perhaps also remove numbers. The cleanup line in the code below strips everything except letters and spaces, which makes the two counts come out equal, at least in this case.

import nltk
import pandas as pd

sentences = pd.Series([ 
    'This is a very good site. I will recommend it to others.',
    'Can you please give me a call at 9983938428. have issues with the listings.',
    'good work! keep it up',
    'not a very helpful site in finding home decor. '
])

# remove anything but letters and spaces (regex=True so the patterns are treated as regular expressions on newer pandas)
sentences = sentences.str.replace('[^A-Za-z ]', '', regex=True).str.replace(' +', ' ', regex=True).str.strip()

splitwords = [ nltk.word_tokenize( str(sentence) ) for sentence in sentences ]
print(splitwords)
    # output: [['This', 'is', 'a', 'very', 'good', 'site', 'I', 'will', 'recommend', 'it', 'to', 'others'], ['Can', 'you', 'please', 'give', 'me', 'a', 'call', 'at', 'have', 'issues', 'with', 'the', 'listings'], ['good', 'work', 'keep', 'it', 'up'], ['not', 'a', 'very', 'helpful', 'site', 'in', 'finding', 'home', 'decor']]

wordcounts = [ len(words) for words in splitwords ]
print(wordcounts)
    # output: [12, 13, 5, 9]

wordcounts2 = [ sentence.count(' ') + 1 for sentence in sentences ]
print(wordcounts2)
    # output: [12, 13, 5, 9]

If you aren't using Pandas, you might not need str().

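For example, a minimal sketch of the non-pandas case, reusing two of the sample sentences from above as plain Python strings, where no str() conversion is involved:

import nltk

plain_sentences = [
    'This is a very good site. I will recommend it to others.',
    'good work! keep it up',
]

# Plain Python strings can be passed to word_tokenize directly.
tokens = [nltk.word_tokenize(sentence) for sentence in plain_sentences]
print([len(words) for words in tokens])
    # output: [14, 6]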