Script to remove Python comments/docstrings
Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use or share it, but you must do so under the same license and attribute it to the original authors (not me): StackOverflow.
Original URL: http://stackoverflow.com/questions/1769332/
Asked by Anurag Uniyal
Is there a Python script or tool available which can remove comments and docstrings from Python source?
It should take care of cases like:
"""
aas
"""
def f():
m = {
u'x':
u'y'
} # faake docstring ;)
if 1:
'string' >> m
if 2:
'string' , m
if 3:
'string' > m
So at last I have come up with a simple script, which uses the tokenize module and removes comment tokens. It seems to work pretty well, except that I am not able to remove docstrings in all cases. See if you can improve it to remove docstrings.
import cStringIO
import tokenize

def remove_comments(src):
    """
    This reads tokens using tokenize.generate_tokens and recombines them
    using tokenize.untokenize, skipping comment/docstring tokens in between.
    """
    f = cStringIO.StringIO(src)

    class SkipException(Exception):
        pass

    processed_tokens = []
    last_token = None
    # go through all the tokens and try to skip comments and docstrings
    for tok in tokenize.generate_tokens(f.readline):
        t_type, t_string, t_srow_scol, t_erow_ecol, t_line = tok
        try:
            if t_type == tokenize.COMMENT:
                raise SkipException()
            elif t_type == tokenize.STRING:
                if last_token is None or last_token[0] in [tokenize.INDENT]:
                    # FIXME: this may remove valid strings too?
                    #raise SkipException()
                    pass
        except SkipException:
            pass
        else:
            processed_tokens.append(tok)
            last_token = tok
    return tokenize.untokenize(processed_tokens)
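For a quick smoke test, here is a minimal sketch of how the function above might be exercised (the sample source is invented; this assumes Python 2, since cStringIO is Python 2 only):

# Invented sample input for remove_comments() (Python 2):
sample = (
    '# a leading comment\n'
    'def f():\n'
    '    """a docstring"""\n'
    '    return 1  # a trailing comment\n'
)
print(remove_comments(sample))
# Both comments are gone; the docstring survives, which is
# exactly the open problem described above.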
Also I would like to test it on a very large collection of scripts with good unit test coverage. Can you suggest such an open source project?
Accepted answer by Ned Batchelder
This does the job:
""" Strip comments and docstrings from a file.
"""
import sys, token, tokenize
def do_file(fname):
""" Run on just one file.
"""
source = open(fname)
mod = open(fname + ",strip", "w")
prev_toktype = token.INDENT
first_line = None
last_lineno = -1
last_col = 0
tokgen = tokenize.generate_tokens(source.readline)
for toktype, ttext, (slineno, scol), (elineno, ecol), ltext in tokgen:
if 0: # Change to if 1 to see the tokens fly by.
print("%10s %-14s %-20r %r" % (
tokenize.tok_name.get(toktype, toktype),
"%d.%d-%d.%d" % (slineno, scol, elineno, ecol),
ttext, ltext
))
if slineno > last_lineno:
last_col = 0
if scol > last_col:
mod.write(" " * (scol - last_col))
if toktype == token.STRING and prev_toktype == token.INDENT:
# Docstring
mod.write("#--")
elif toktype == tokenize.COMMENT:
# Comment
mod.write("##\n")
else:
mod.write(ttext)
prev_toktype = toktype
last_col = ecol
last_lineno = elineno
if __name__ == '__main__':
do_file(sys.argv[1])
I'm leaving stub comments in the place of docstrings and comments since it simplifies the code. If you remove them completely, you also have to get rid of indentation before them.
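To make the effect concrete, here is a hedged demo; the file name example.py, its contents, and saving the script above as stripper.py are all my own assumptions:

import subprocess

# Write a small invented example file:
with open("example.py", "w") as f:
    f.write('def greet(name):\n'
            '    """Say hello."""\n'
            '    # friendly output\n'
            '    print("Hello, %s" % name)\n')

# Run the script above on it (assuming it was saved as stripper.py):
subprocess.call(["python", "stripper.py", "example.py"])

# Inspect the result: the docstring line becomes "#--" and the
# comment line becomes "##":
with open("example.py,strip") as f:
    print(f.read())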
Answered by Dan McDougall
I'm the author of the "mygod, he has written a python interpreter using regex..." project (i.e. pyminifier) mentioned at that link below =). I just wanted to chime in and say that I've improved the code quite a bit using the tokenizer module (which I discovered thanks to this question =)).

You'll be happy to note that the code no longer relies so much on regular expressions and uses tokenize to great effect. Anyway, here's the remove_comments_and_docstrings() function from pyminifier (note: it works properly with the edge cases that the previously-posted code breaks on):
import cStringIO, tokenize

def remove_comments_and_docstrings(source):
    """
    Returns 'source' minus comments and docstrings.
    """
    io_obj = cStringIO.StringIO(source)
    out = ""
    prev_toktype = tokenize.INDENT
    last_lineno = -1
    last_col = 0
    for tok in tokenize.generate_tokens(io_obj.readline):
        token_type = tok[0]
        token_string = tok[1]
        start_line, start_col = tok[2]
        end_line, end_col = tok[3]
        ltext = tok[4]
        # The following two conditionals preserve indentation.
        # This is necessary because we're not using tokenize.untokenize()
        # (because it spits out code with copious amounts of oddly-placed
        # whitespace).
        if start_line > last_lineno:
            last_col = 0
        if start_col > last_col:
            out += (" " * (start_col - last_col))
        # Remove comments:
        if token_type == tokenize.COMMENT:
            pass
        # This series of conditionals removes docstrings:
        elif token_type == tokenize.STRING:
            if prev_toktype != tokenize.INDENT:
                # This is likely a docstring; double-check we're not inside an operator:
                if prev_toktype != tokenize.NEWLINE:
                    # Note regarding NEWLINE vs NL: the tokenize module
                    # differentiates between newlines that end a statement
                    # and newlines inside of operators such as parens,
                    # brackets, and curly braces. Newlines that end a
                    # statement are NEWLINE; newlines inside of operators
                    # (and blank lines) are NL.
                    # Catch whole-module docstrings:
                    if start_col > 0:
                        # Unlabelled indentation means we're inside an operator
                        out += token_string
                    # Note regarding the INDENT token: the tokenize module does
                    # not label indentation inside of an operator (parens,
                    # brackets, and curly braces) as actual indentation.
                    # For example:
                    # def foo():
                    #     "The spaces before this docstring are tokenize.INDENT"
                    #     test = [
                    #         "The spaces before this string do not get a token"
                    #     ]
        else:
            out += token_string
        prev_toktype = token_type
        last_col = end_col
        last_lineno = end_line
    return out
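A minimal usage sketch (again Python 2 because of cStringIO; the input is adapted from the question's edge cases):

tricky = (
    '"""\n'
    'aas\n'
    '"""\n'
    'def f():\n'
    '    m = {\n'
    "        u'x':\n"
    "            u'y'\n"
    '    }  # faake docstring ;)\n'
)
# The module docstring and the comment are removed, while the
# dictionary's string literals are preserved:
print(remove_comments_and_docstrings(tricky))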
Answered by Denis Otkidach
Try testing each chunk of tokens ending with NEWLINE. I believe the correct pattern for a docstring (including cases where it serves as a comment but isn't assigned to __doc__) is the following, assuming the match is performed from the start of the file or right after a NEWLINE:

( DEDENT+ | INDENT? ) STRING+ COMMENT? NEWLINE

This should handle all the tricky cases: string concatenation, line continuation, module/class/function docstrings, and a comment on the same line after a string. Note that there is a difference between NL and NEWLINE tokens, so we don't need to worry about a lone string line inside an expression.
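This answer gives a pattern rather than code; the sketch below is entirely my own interpretation of how one might apply it (Python 3):

import io
import tokenize

def is_docstring_chunk(chunk):
    """Test one chunk against ( DEDENT+ | INDENT? ) STRING+ COMMENT? NEWLINE."""
    # Ignore non-logical newlines (NL), e.g. blank lines.
    types = [t[0] for t in chunk if t[0] != tokenize.NL]
    while types and types[0] == tokenize.DEDENT:            # DEDENT+
        types.pop(0)
    if types and types[0] == tokenize.INDENT:               # INDENT?
        types.pop(0)
    if len(types) >= 2 and types[-2] == tokenize.COMMENT:   # COMMENT?
        del types[-2]
    return (len(types) >= 2 and types[-1] == tokenize.NEWLINE
            and all(t == tokenize.STRING for t in types[:-1]))  # STRING+

def docstring_chunks(source):
    """Yield (chunk, is_docstring) for every chunk of tokens ending with NEWLINE."""
    chunk = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        chunk.append(tok)
        if tok[0] == tokenize.NEWLINE:
            yield chunk, is_docstring_chunk(chunk)
            chunk = []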
Answered by p4r4noj4
I've just used the code given by Dan McDougall, and I found two problems:

- There were too many empty new lines, so I decided to remove a line every time we have two consecutive new lines.
- When the Python code was processed, all spaces were missing (except indentation), so things like "import Anything" changed into "importAnything", which caused problems. I added spaces before and after the reserved Python words that needed them. I hope I didn't make any mistakes there.

I think I have fixed both things by adding (before return) a few more lines:
# Removing unneeded newlines from string
buffered_content = cStringIO.StringIO(content)  # Takes the string generated by Dan McDougall's code as input
content_without_newlines = ""
previous_token_type = tokenize.NEWLINE
for tokens in tokenize.generate_tokens(buffered_content.readline):
    token_type = tokens[0]
    token_string = tokens[1]
    if previous_token_type == tokenize.NL and token_type == tokenize.NL:
        pass
    else:
        # add necessary spaces
        prev_space = ''
        next_space = ''
        if token_string in ['and', 'as', 'or', 'in', 'is']:
            prev_space = ' '
        if token_string in ['and', 'del', 'from', 'not', 'while', 'as', 'elif', 'global', 'or', 'with', 'assert', 'if', 'yield', 'except', 'import', 'print', 'class', 'exec', 'in', 'raise', 'is', 'return', 'def', 'for', 'lambda']:
            next_space = ' '
        content_without_newlines += prev_space + token_string + next_space  # This will be our new output!
    previous_token_type = token_type
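As given, this is a fragment: content is assumed to already hold the output of remove_comments_and_docstrings(), and the imports are not shown. Wrapped into a self-contained function, it might look like this (my packaging of the same logic, Python 2):

import cStringIO
import tokenize

# Keywords that need a trailing space so e.g. "import x" does not
# become "importx" (the list from the fragment above):
NEEDS_TRAILING_SPACE = [
    'and', 'del', 'from', 'not', 'while', 'as', 'elif', 'global', 'or',
    'with', 'assert', 'if', 'yield', 'except', 'import', 'print', 'class',
    'exec', 'in', 'raise', 'is', 'return', 'def', 'for', 'lambda']
# Keywords that also need a leading space:
NEEDS_LEADING_SPACE = ['and', 'as', 'or', 'in', 'is']

def collapse_newlines_and_respace(content):
    """Post-process the output of remove_comments_and_docstrings()."""
    buffered_content = cStringIO.StringIO(content)
    result = ""
    previous_token_type = tokenize.NEWLINE
    for tokens in tokenize.generate_tokens(buffered_content.readline):
        token_type, token_string = tokens[0], tokens[1]
        # Skip the second of two consecutive empty lines:
        if not (previous_token_type == tokenize.NL and token_type == tokenize.NL):
            prev_space = ' ' if token_string in NEEDS_LEADING_SPACE else ''
            next_space = ' ' if token_string in NEEDS_TRAILING_SPACE else ''
            result += prev_space + token_string + next_space
        previous_token_type = token_type
    return result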
Answered by SurpriseDog
I found an easier way to do this with the ast and astunparse modules (available from pip). It converts the code text into a syntax tree, and then the astunparse module prints the code back out again without the comments. I had to strip out the docstrings with a simple match, but it seems to work. I've been looking through the output, and so far the only downside of this method is that it strips all newlines from your code.
import ast, astunparse

with open('my_module.py') as f:
    lines = astunparse.unparse(ast.parse(f.read())).split('\n')
    for line in lines:
        if line.lstrip()[:1] not in ("'", '"'):
            print(line)
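Worth noting: since Python 3.9 the standard library includes ast.unparse, so the third-party astunparse dependency can be dropped. The same idea then becomes (my adaptation):

import ast

with open('my_module.py') as f:
    tree = ast.parse(f.read())

# ast.unparse (stdlib, Python 3.9+) already discards comments;
# the prefix check below filters out the remaining docstrings.
for line in ast.unparse(tree).split('\n'):
    if line.lstrip()[:1] not in ("'", '"'):
        print(line)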
Answered by Basj
Here is a modification of Dan's solution that makes it run on Python 3, removes empty lines, and makes it ready to use:
import io, tokenize

def remove_comments_and_docstrings(source):
    io_obj = io.StringIO(source)
    out = ""
    prev_toktype = tokenize.INDENT
    last_lineno = -1
    last_col = 0
    for tok in tokenize.generate_tokens(io_obj.readline):
        token_type = tok[0]
        token_string = tok[1]
        start_line, start_col = tok[2]
        end_line, end_col = tok[3]
        ltext = tok[4]
        if start_line > last_lineno:
            last_col = 0
        if start_col > last_col:
            out += (" " * (start_col - last_col))
        if token_type == tokenize.COMMENT:
            pass
        elif token_type == tokenize.STRING:
            if prev_toktype != tokenize.INDENT:
                if prev_toktype != tokenize.NEWLINE:
                    if start_col > 0:
                        out += token_string
        else:
            out += token_string
        prev_toktype = token_type
        last_col = end_col
        last_lineno = end_line
    out = '\n'.join(l for l in out.splitlines() if l.strip())
    return out

with open('test.py', 'r') as f:
    print(remove_comments_and_docstrings(f.read()))
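A quick sanity check against an invented input (my own test, not part of the original answer):

sample = (
    'def f():\n'
    '    """function docstring"""\n'
    "    s = 'kept'  # inline comment\n"
    '\n'
    '    return s\n'
)
stripped = remove_comments_and_docstrings(sample)
assert 'docstring' not in stripped   # function docstring removed
assert "'kept'" in stripped          # assigned string preserved
assert '\n\n' not in stripped        # empty lines collapsed
print(stripped)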