Python NLTK: SyntaxError: Non-ASCII character '\xc3' in file (Sentiment Analysis - NLP)
Disclaimer: this page is an English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use or share it, but you must do so under the same license, link to the original question, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/26899235/
Python NLTK: SyntaxError: Non-ASCII character '\xc3' in file (Sentiment Analysis -NLP)
Asked by rkbom9
I am playing around with NLTK to do an assignment on sentiment analysis. I am using Python 2.7, NLTK 3.0, and NumPy 1.9.1.
This is the code:
__author__ = 'karan'

import nltk
import re
import sys

def main():
    print("Start");
    # getting the stop words
    stopWords = open("english.txt","r");
    stop_word = stopWords.read().split();
    AllStopWrd = []
    for wd in stop_word:
        AllStopWrd.append(wd);
    print("stop words-> ",AllStopWrd);
    # sample and also cleaning it
    tweet1= 'Love, my new toyí??í?í??í?#iPhone6. Its good https://twitter.com/Sandra_Ortega/status/513807261769424897/photo/1'
    print("old tweet-> ",tweet1)
    tweet1 = tweet1.lower()
    tweet1 = ' '.join(re.sub("(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)"," ",tweet1).split())
    print(tweet1);
    tw = tweet1.split()
    print(tw)
    #tokenize
    sentences = nltk.word_tokenize(tweet1)
    print("tokenized ->", sentences)
    #remove stop words
    Otweet =[]
    for w in tw:
        if w not in AllStopWrd:
            Otweet.append(w);
    print("sans stop word-> ",Otweet)
    # get taggers for neg/pos/inc/dec/inv words
    taggers ={}
    negWords = open("neg.txt","r");
    neg_word = negWords.read().split();
    print("ned words-> ",neg_word)
    posWords = open("pos.txt","r");
    pos_word = posWords.read().split();
    print("pos words-> ",pos_word)
    incrWords = open("incr.txt","r");
    inc_word = incrWords.read().split();
    print("incr words-> ",inc_word)
    decrWords = open("decr.txt","r");
    dec_word = decrWords.read().split();
    print("dec wrds-> ",dec_word)
    invWords = open("inverse.txt","r");
    inv_word = invWords.read().split();
    print("inverse words-> ",inv_word)
    for nw in neg_word:
        taggers.update({nw:'negative'});
    for pw in pos_word:
        taggers.update({pw:'positive'});
    for iw in inc_word:
        taggers.update({iw:'inc'});
    for dw in dec_word:
        taggers.update({dw:'dec'});
    for ivw in inv_word:
        taggers.update({ivw:'inv'});
    print("tagger-> ",taggers)
    print(taggers.get('little'))
    # get parts of speech
    posTagger = [nltk.pos_tag(tw)]
    print("posTagger-> ",posTagger)

main();
This is the error that I am getting when running my code:
SyntaxError: Non-ASCII character '\xc3' in file C:/Users/karan/PycharmProjects/mainProject/sentiment.py on line 19, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details
How do I fix this error?
I also tried the code with Python 3.4.2, NLTK 3.0, and NumPy 1.9.1, but then I get this error:
Traceback (most recent call last):
  File "C:/Users/karan/PycharmProjects/mainProject/sentiment.py", line 80, in <module>
    main();
  File "C:/Users/karan/PycharmProjects/mainProject/sentiment.py", line 72, in main
    posTagger = [nltk.pos_tag(tw)]
  File "C:\Python34\lib\site-packages\nltk\tag\__init__.py", line 100, in pos_tag
    tagger = load(_POS_TAGGER)
  File "C:\Python34\lib\site-packages\nltk\data.py", line 779, in load
    resource_val = pickle.load(opened_resource)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xcb in position 0: ordinal not in range(128)
Accepted answer by Padraic Cunningham
Add the following to the top of your file: # coding=utf-8
If you go to the link in the error, you can see the reason why:
Defining the Encoding
Python will default to ASCII as standard encoding if no other encoding hints are given. To define a source code encoding, a magic comment must be placed into the source files either as first or second line in the file, such as: # coding=<encoding name>
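As a minimal sketch (not the poster's actual file), this is roughly what the top of sentiment.py could look like once the declaration is in place, assuming the file is saved as UTF-8; the string literal here is a simplified stand-in for the original tweet:

# coding=utf-8
# The magic comment above must appear on the first or second line so that
# Python 2 knows how to decode non-ASCII bytes in this source file.
__author__ = 'karan'

# A literal containing non-ASCII characters (accented letters, mojibake, etc.)
# no longer raises "SyntaxError: Non-ASCII character '\xc3'" once the
# encoding is declared.
tweet1 = 'Love, my new toy café #iPhone6. Its good'
print(tweet1)

Under Python 3 the declaration is unnecessary, since UTF-8 is already the default source encoding.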
