Converting "yield from" statement to Python 2.7 code

Disclaimer: this page is a Chinese-English translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow. Original question: http://stackoverflow.com/questions/17581332/


Converting "yield from" statement to Python 2.7 code

python, generator, python-2.x, yield, yield-from

Asked by vkaul11

I had the code below in Python 3.2 and I wanted to run it in Python 2.7. I did convert it (I have put the code of missing_elements in both versions) but I am not sure if that is the most efficient way to do it. Basically, what happens if there are two yield from calls like below, in the upper half and the lower half of the missing_element function? Are the entries from the two halves (upper and lower) appended to each other in one list, so that the parent recursion with the yield from call can use both halves together?


def missing_elements(L, start, end):  # Python 3.2
    if end - start <= 1: 
        if L[end] - L[start] > 1:
            yield from range(L[start] + 1, L[end])
        return

    index = start + (end - start) // 2

    # is the lower half consecutive?
    consecutive_low = L[index] == L[start] + (index - start)
    if not consecutive_low:
        yield from missing_elements(L, start, index)

    # is the upper part consecutive?
    consecutive_high = L[index] == L[end] - (end - index)
    if not consecutive_high:
        yield from missing_elements(L, index, end)

def main():
    L = [10, 11, 13, 14, 15, 16, 17, 18, 20]
    print(list(missing_elements(L, 0, len(L)-1)))
    L = range(10, 21)
    print(list(missing_elements(L, 0, len(L)-1)))

def missing_elements(L, start, end):  # Python 2.7
    return_list = []                
    if end - start <= 1: 
        if L[end] - L[start] > 1:
            return range(L[start] + 1, L[end])

    index = start + (end - start) // 2

    # is the lower half consecutive?
    consecutive_low =  L[index] == L[start] + (index - start)
    if not consecutive_low:
        return_list.append(missing_elements(L, start, index))

    # is the upper part consecutive?
    consecutive_high =  L[index] == L[end] - (end - index)
    if not consecutive_high:
        return_list.append(missing_elements(L, index, end))
    return return_list

Accepted answer by abarnert

If you don't use the results of your yields,* you can always turn this:


yield from foo

… into this:


for bar in foo:
    yield bar

There might be a performance cost,** but there is never a semantic difference.




Are the entries from the two halves (upper and lower) appended to each other in one list, so that the parent recursion with the yield from call can use both halves together?


No! The whole point of iterators and generators is that you don't build actual lists and append them together.


But the effect is similar: you just yield from one, then yield from another.


If you think of the upper half and the lower half as "lazy lists", then yes, you can think of this as a "lazy append" that creates a larger "lazy list". And if you call liston the result of the parent function, you of course willget an actual listthat's equivalent to appending together the two lists you would have gotten if you'd done yield list(…)instead of yield from ….

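As a concrete illustration of that "lazy append" (a small sketch that is not part of the original answer, using made-up lower/upper generators and the for-loop form so it runs on both Python 2 and 3):

def lower():
    yield 1
    yield 2

def upper():
    yield 3
    yield 4

def both():
    # no intermediate list is built; values are simply yielded
    # from one generator and then from the other
    for x in lower():
        yield x
    for x in upper():
        yield x

print(list(both()))                    # [1, 2, 3, 4]
print(list(lower()) + list(upper()))   # the same result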

But I think it's easier to think of it the other way around: what it does is exactly the same thing the for loops do.


If you saved the two iterators into variables and looped over itertools.chain(upper, lower), that would be the same as looping over the first and then looping over the second, right? No difference here. In fact, you could implement chain as just:


def chain(*args):
    for arg in args:
        yield from arg


* Not the values the generator yields to its caller, but the values of the yield expressions themselves, within the generator (which come from the caller using the send method), as described in PEP 342. You're not using these in your examples, and I'm willing to bet you're not in your real code. But coroutine-style code often uses the value of a yield from expression—see PEP 3156 for examples. Such code usually depends on other features of Python 3.3 generators—in particular, the new StopIteration.value from the same PEP 380 that introduced yield from—so it will have to be rewritten. But if not, the PEP also shows you the complete, horridly messy equivalent, and you can of course pare down the parts you don't care about. And if you don't use the value of the expression, it pares down to the two lines above.


** Not a huge one, and there's nothing you can do about it short of using Python 3.3 or completely restructuring your code. It's exactly the same case as translating list comprehensions to Python 1.5 loops, or any other case when there's a new optimization in version X.Y and you need to use an older version.


Answered by ovgolovin

Replace them with for-loops:


yield from range(L[start] + 1, L[end])

==>

for i in range(L[start] + 1, L[end]):
    yield i

The same for the recursive missing_elements calls:


yield from missing_elements(L, index, end)

==>

for el in missing_elements(L, index, end):
    yield el
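
Putting the two replacements together, the whole function from the question might look like this in Python 2.7 (a sketch based on the rules above, not code from the original answer):

def missing_elements(L, start, end):  # Python 2.7, yield from replaced by for-loops
    if end - start <= 1:
        if L[end] - L[start] > 1:
            for i in range(L[start] + 1, L[end]):
                yield i
        return

    index = start + (end - start) // 2

    # is the lower half consecutive?
    if L[index] != L[start] + (index - start):
        for el in missing_elements(L, start, index):
            yield el

    # is the upper half consecutive?
    if L[index] != L[end] - (end - index):
        for el in missing_elements(L, index, end):
            yield el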

Answered by julx

I think I found a way to emulate the Python 3.x yield from construct in Python 2.x. It's not efficient and it is a little hacky, but here it is:


import types

def inline_generators(fn):
    # Decorator: wraps a generator function so that any value wrapped
    # with _from(...) is recursively flattened into the outer stream.
    def inline(value):
        if isinstance(value, InlineGenerator):
            for x in value.wrapped:
                for y in inline(x):
                    yield y
        else:
            yield value
    def wrapped(*args, **kwargs):
        result = fn(*args, **kwargs)
        if isinstance(result, types.GeneratorType):
            result = inline(_from(result))
        return result
    return wrapped

class InlineGenerator(object):
    # Marker type telling inline() that the wrapped generator
    # should be flattened rather than yielded as a plain value.
    def __init__(self, wrapped):
        self.wrapped = wrapped

def _from(value):
    # Write "yield _from(subgen())" where Python 3.3 would use
    # "yield from subgen()".
    assert isinstance(value, types.GeneratorType)
    return InlineGenerator(value)

Usage:


@inline_generators
def outer(x):
    def inner_inner(x):
        for x in range(1, x + 1):
            yield x
    def inner(x):
        for x in range(1, x + 1):
            yield _from(inner_inner(x))
    for x in range(1, x + 1):
        yield _from(inner(x))

for x in outer(3):
    print x,

Produces output:


1 1 1 2 1 1 2 1 2 3

Maybe someone finds this helpful.


Known issues: lacks support for send() and various corner cases described in PEP 380. These could be added, and I will edit my entry once I get it working.


Answered by alkalinity

I've found using resource contexts (using the python-resources module) to be an elegant mechanism for implementing subgenerators in Python 2.7. Conveniently, I'd already been using resource contexts anyway.


If in Python 3.3 you would have:


@resources.register_func
def get_a_thing(type_of_thing):
    if type_of_thing == "A":
        yield from complicated_logic_for_handling_a()
    else:
        yield from complicated_logic_for_handling_b()

def complicated_logic_for_handling_a():
    a = expensive_setup_for_a()
    yield a
    expensive_tear_down_for_a()

def complicated_logic_for_handling_b():
    b = expensive_setup_for_b()
    yield b
    expensive_tear_down_for_b()

In Python 2.7 you would have:


@resources.register_func
def get_a_thing(type_of_thing):
    if type_of_thing == "A":
        with resources.complicated_logic_for_handling_a_ctx() as a:
            yield a
    else:
        with resources.complicated_logic_for_handling_b_ctx() as b:
            yield b

@resources.register_func
def complicated_logic_for_handling_a():
    a = expensive_setup_for_a()
    yield a
    expensive_tear_down_for_a()

@resources.register_func
def complicated_logic_for_handling_b():
    b = expensive_setup_for_b()
    yield b
    expensive_tear_down_for_b()

Note how the complicated-logic operations only require the registration as a resource.


Answered by Tadhg McDonald-Jensen

I just came across this issue, and my usage was a bit more difficult since I needed the return value of yield from:


result = yield from other_gen()

This cannot be represented as a simple for loop, but it can be reproduced with this:


_iter = iter(other_gen())
try:
    while True: #broken by StopIteration
        yield next(_iter)
except StopIteration as e:
    if e.args:
        result = e.args[0]
    else:
        result = None
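
For completeness, here is how this might fit together in Python 2, assuming a hypothetical other_gen that signals its return value by raising StopIteration explicitly (Python 2 generators cannot use "return value"); this is a sketch, not code from the original answer:

def other_gen():
    yield 1
    yield 2
    # Python 2 generators cannot do "return 'done'", so the return
    # value is passed by raising StopIteration explicitly
    raise StopIteration("done")

def consumer():
    _iter = iter(other_gen())
    try:
        while True:  # broken by StopIteration
            yield next(_iter)
    except StopIteration as e:
        result = e.args[0] if e.args else None
    yield "return value was %r" % result

print(list(consumer()))  # [1, 2, "return value was 'done'"]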

Hopefully this will help people who come across the same problem. :)


Answered by Cliff Hill

What about using the definition from PEP 380 in order to construct a Python 2 syntax version:


The statement:


RESULT = yield from EXPR

is semantically equivalent to:


_i = iter(EXPR)
try:
    _y = next(_i)
except StopIteration as _e:
    _r = _e.value
else:
    while 1:
        try:
            _s = yield _y
        except GeneratorExit as _e:
            try:
                _m = _i.close
            except AttributeError:
                pass
            else:
                _m()
            raise _e
        except BaseException as _e:
            _x = sys.exc_info()
            try:
                _m = _i.throw
            except AttributeError:
                raise _e
            else:
                try:
                    _y = _m(*_x)
                except StopIteration as _e:
                    _r = _e.value
                    break
        else:
            try:
                if _s is None:
                    _y = next(_i)
                else:
                    _y = _i.send(_s)
            except StopIteration as _e:
                _r = _e.value
                break
RESULT = _r

In a generator, the statement:


return value

is semantically equivalent to


raise StopIteration(value)

except that, as currently, the exception cannot be caught by except clauses within the returning generator.


The StopIteration exception behaves as though defined thusly:


class StopIteration(Exception):

    def __init__(self, *args):
        if len(args) > 0:
            self.value = args[0]
        else:
            self.value = None
        Exception.__init__(self, *args)