
Warning: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/44173624/


How to apply NLTK word_tokenize library on a Pandas dataframe for Twitter data?

Tags: python, pandas, twitter, nltk, tokenize

Asked by Vic13

This is the code I am using for semantic analysis of Twitter data:

import pandas as pd
import datetime
import numpy as np
import re
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.stem.porter import PorterStemmer

df = pd.read_csv('twitDB.csv', header=None,
                 sep=',', error_bad_lines=False, encoding='utf-8')

# concatenate the four text columns into a single 'tweet' column
hula = df[[0, 1, 2, 3]]
hula = hula.fillna(0)
hula['tweet'] = (hula[0].astype(str) + hula[1].astype(str)
                 + hula[2].astype(str) + hula[3].astype(str))
hula['tweet'] = hula.tweet.str.lower()

# collapse repeated whitespace/dots and strip punctuation characters
ho = hula['tweet']
ho = ho.replace(r'\s+', ' ', regex=True)
ho = ho.replace(r'\.+', '.', regex=True)
special_char_list = [':', ';', '?', '}', ')', '{', '(']
for special_char in special_char_list:
    ho = ho.str.replace(special_char, '', regex=False)
print(ho)

# replace URLs, drop hashtags and quote characters
ho = ho.replace(r'((www\.[\s]+)|(https?://[^\s]+))', 'URL', regex=True)
ho = ho.replace(r'#([^\s]+)', r'', regex=True)
ho = ho.replace(r'[\'"]', '', regex=True)

lem = WordNetLemmatizer()
stem = PorterStemmer()

eng_stopwords = stopwords.words('english')
# to_string() flattens the whole frame into one big string,
# so everything below operates on all rows at once
ho = ho.to_frame(name=None)
a = ho.to_string(buf=None, columns=None, col_space=None, header=True,
                 index=True, na_rep='NaN', formatters=None, float_format=None,
                 sparsify=False, index_names=True, justify=None, line_width=None,
                 max_rows=None, max_cols=None, show_dimensions=False)
fg = stem.stem(a)
wordList = word_tokenize(fg)
wordList = [word for word in wordList if word not in eng_stopwords]
print(wordList)

Input (i.e. a):

                                              tweet
0     1495596971.6034188::automotive auto ebc greens...
1     1495596972.330948::new free stock photo of cit...

I am getting the output (wordList) in this format:

tweet
 0
1495596971.6034188
:
:automotive
auto

I want the output for each row kept in row format (one token list per tweet). How can I do it? If you have better code for semantic analysis of Twitter, please share it with me.

Answered by alvas

In short:


df['Text'].apply(word_tokenize)

Or if you want to add another column to store the tokenized list of strings:


df['tokenized_text'] = df['Text'].apply(word_tokenize) 
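As a quick sanity check of the row-wise pattern (a sketch, not part of the original answer): the example below uses `str.split` as a stand-in for `word_tokenize` so it runs without NLTK's punkt data, but the `.apply` usage is identical, and it shows that each row keeps its own token list.

```python
import pandas as pd

# Toy DataFrame; the 'Text' column name matches the answer's example.
df = pd.DataFrame({'Text': ['automotive auto ebc greenstuff',
                            'new free stock photo of city']})

# Apply the tokenizer row-wise: one list of tokens per row,
# instead of one flat stream of tokens for the whole frame.
df['tokenized_text'] = df['Text'].apply(str.split)
print(df['tokenized_text'].tolist())
# [['automotive', 'auto', 'ebc', 'greenstuff'],
#  ['new', 'free', 'stock', 'photo', 'of', 'city']]
```

This is exactly what the question's code loses by calling `to_string()` first: once all rows are flattened into a single string, the tokenizer can no longer tell rows apart.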

There are tokenizers written specifically for Twitter text; see http://www.nltk.org/api/nltk.tokenize.html#module-nltk.tokenize.casual


To use nltk.tokenize.TweetTokenizer:


from nltk.tokenize import TweetTokenizer
tt = TweetTokenizer()
df['Text'].apply(tt.tokenize)
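To illustrate why the casual tokenizer suits tweets (a sketch; the sample text is made up, not taken from the question's data): `TweetTokenizer` is regex-based, so it needs no downloaded NLTK corpora, and it keeps tweet-specific tokens such as hashtags and emoticons intact.

```python
import pandas as pd
from nltk.tokenize import TweetTokenizer

tt = TweetTokenizer()
df = pd.DataFrame({'Text': ['This is a cooool #dummysmiley: :-) :-P <3']})

# Row-wise tokenization, same pattern as with word_tokenize above.
tokens = df['Text'].apply(tt.tokenize).iloc[0]
print(tokens)
# '#dummysmiley', ':-)' and ':-P' each survive as a single token,
# where word_tokenize would split them into punctuation fragments.
```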

Similar to:
