I have a generator that is roughly as follows:

def gen1():
    for x, y in enumerate(xrange(20)):
        a = 5*x
        b = 10*y
        yield a, b

From this generator, I would like to create 2 separate generators as follows:

for a in gen1_split_a():
    yield a

for b in gen1_split_b():
    yield b

What's my play, SA?

DSM
cavaunpeu
    You could do `for a, b in gen1():`, but if you want all the `a`s then all the `b`s I'd suggest a complete restructure; there's no point using a generator, as you need to get all pairs up-front. – jonrsharpe Jan 19 '15 at 17:16
  • You can just run the generator twice, each time only using one of the two elements. – interjay Jan 19 '15 at 17:21
  • Why not just do `gen = gen1()` and `def gen_bs(gen): for b in gen: yield b[1]`, then `gen_bs(gen)`? – Rick Jan 19 '15 at 17:28
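For reference, the last two comments can be sketched in a few lines. This is Python 3 (so `range` replaces the question's `xrange`); `gen_bs` is the wrapper name Rick proposed:

```python
def gen1():
    # the question's generator, ported to Python 3 (range instead of xrange)
    for x, y in enumerate(range(20)):
        yield 5 * x, 10 * y

# interjay's suggestion: run the generator twice, using one element each time
a_values = [a for a, _ in gen1()]
b_values = [b for _, b in gen1()]

# Rick's suggestion: wrap a single generator instance, keeping only one index
def gen_bs(gen):
    for pair in gen:
        yield pair[1]

print(a_values[:3])               # [0, 5, 10]
print(list(gen_bs(gen1()))[:3])   # [0, 10, 20]
```

Note that interjay's approach runs `gen1` twice from the start, while Rick's consumes one shared instance and simply discards the `a` values.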

2 Answers

You can't, not without holding on to all of the generator's output just to be able to produce the b values in the second loop. That can get costly in terms of memory.

You'd use itertools.tee() to 'duplicate' the generator:

from itertools import tee

def split_gen(gen):
    gen_a, gen_b = tee(gen, 2)
    return (a for a, b in gen_a), (b for a, b in gen_b)

gen1_split_a, gen1_split_b = split_gen(gen1)

for a in gen1_split_a:
    print a

for b in gen1_split_b:
    print b

but what happens in this case is that the tee object ends up having to store everything gen1 produces anyway. From the documentation:

This itertool may require significant auxiliary storage (depending on how much temporary data needs to be stored). In general, if one iterator uses most or all of the data before another iterator starts, it is faster to use list() instead of tee().

Following that advice, just put the b values into a list for the second loop:

b_values = []
for a, b in gen1():
    print a
    b_values.append(b)

for b in b_values:
    print b

or better yet, just process both a and b in the one loop.
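A sketch of that single-pass idea (Python 3 syntax here; `a_total` and the two processing steps are illustrative stand-ins for whatever you actually do with the values):

```python
def gen1():
    # the question's generator, ported to Python 3
    for x, y in enumerate(range(20)):
        yield 5 * x, 10 * y

a_total = 0
b_values = []
for a, b in gen1():
    a_total += a        # handle each a as it is produced
    b_values.append(b)  # keep b only if a later pass really needs it
```

This way nothing is buffered beyond what the second pass genuinely requires.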

Martijn Pieters
  • Thanks Martijn! The former did the trick. That said, I did some refactoring and omitted this from the code entirely. Saved in my notes for next time, though! – cavaunpeu Jan 20 '15 at 16:29

I have a solution that might not be exactly what you want. It separates an n-tuple generator into a tuple of n individual generators. It requires, however, that each individual value of the current tuple be consumed before proceeding to the next tuple. Strictly speaking, it "splits" an n-tuple generator into n generators, but your example won't work exactly as presented.

It exploits Python's ability to send values back into a generator to influence future yields. The same idea should also be implementable with classes instead but I wanted to get to grips with generators anyway.
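As a minimal illustration of that mechanism (`echo_count` is just a made-up name for this sketch): a value passed to `.send()` becomes the result of the `yield` expression inside the generator.

```python
def echo_count():
    # keeps a running total: each sent value is added, the total yielded back
    total = 0
    while True:
        received = yield total
        total += received

g = echo_count()
g.send(None)      # prime the generator: advance it to the first yield
print(g.send(3))  # 3
print(g.send(4))  # 7
```

The same two-way channel is what lets the sub-generators below report their index back to the shared coordinating generator.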

When the new generators are initialized, they only know the current n-tuple. Every time they yield the value at their respective index, a callback is performed that informs a higher level generator of this index. Once all indices of the current tuple have been yielded, the higher level generator moves on to the next tuple and the process repeats.

It may be a bit unwieldy, but here is the code (Python 3.6).

from typing import TypeVar, Generator, Tuple, Iterator, Optional

TYPE_A = TypeVar("TYPE_A")


def _next_value(source: Iterator[Tuple[TYPE_A, ...]], size: int) -> Generator[Tuple[TYPE_A, ...], Optional[int], None]:
    checked = [False for _ in range(size)]
    value = next(source)
    while True:
        index = yield value
        if all(checked):
            # every index of the current tuple has been consumed: advance
            try:
                value = next(source)
            except StopIteration:
                return  # end cleanly when the source is exhausted (PEP 479)
            for _i in range(len(checked)):
                checked[_i] = False
        checked[index] = True


def _sub_iterator(index: int, callback: Generator[Tuple[TYPE_A, ...], int, None]) -> Generator[TYPE_A, None, None]:
    while True:
        try:
            value = callback.send(index)
        except StopIteration:
            return  # the shared coordinating generator has finished
        yield value[index]


def split_iterator(source: Iterator[Tuple[TYPE_A, ...]], size: int) -> Tuple[Generator[TYPE_A, None, None], ...]:
    generators = []

    _cb = _next_value(source, size)
    _cb.send(None)

    for _i in range(size):
        each_generator = _sub_iterator(_i, _cb)
        generators.append(each_generator)

    return tuple(generators)


if __name__ == "__main__":
    def triple():
        _i = 0
        while True:
            yield tuple(range(_i, _i + 3))
            _i += 1

    g = triple()
    for i, each_value in enumerate(g):
        if i >= 5:
            break
        print(each_value)

    print()

    g = triple()
    a_gen, b_gen, c_gen = split_iterator(g, 3)
    for i, (a_value, b_value, c_value) in enumerate(zip(a_gen, b_gen, c_gen)):
        if i >= 5:
            break
        print((a_value, b_value, c_value))

triple() is a 3-tuple generator and split_iterator() produces three generators, each of which yields one index from the tuples yielded by triple(). Each individual _sub_iterator progresses only once all values from the current tuple have been yielded.

wehnsdaefflae