Why do generator expressions have lower stamina than map objects in Python 3 while recursing?

Time:11-21

For example, we can define the fibonacci sequence using recursion, in a generator:

from itertools import pairwise

def fibs():
    yield 0
    yield 1
    yield from map(sum, pairwise(fibs()))

for index, result in enumerate(fibs()):
    print(index, result)

If we try to run this, it gives up at index 1025, failing with a RecursionError. Below is the end of the output:

1024 4506699633677819813104383235728886049367860596218604830803023149600030645708721396248792609141030396244873266580345011219530209367425581019871067646094200262285202346655868899711089246778413354004103631553925405243
1025 7291993184377412737043195648396979558721167948342308637716205818587400148912186579874409368754354848994831816250311893410648104792440789475340471377366852420526027975140687031196633477605718294523235826853392138525
Traceback (most recent call last):
  File "/home/user/Área de Trabalho/fibs1.py", line 6, in fibs
    yield from map(sum, pairwise(fibs()))
  File "/home/user/Área de Trabalho/fibs1.py", line 6, in fibs
    yield from map(sum, pairwise(fibs()))
  File "/home/user/Área de Trabalho/fibs1.py", line 6, in fibs
    yield from map(sum, pairwise(fibs()))
  [Previous line repeated 997 more times]
RecursionError: maximum recursion depth exceeded while calling a Python object

Now we can try changing it by replacing map with a generator expression, which is often considered more Pythonic:

from itertools import pairwise

def fibs():
    yield 0
    yield 1
    yield from (sum(pair) for pair in pairwise(fibs()))

for index, result in enumerate(fibs()):
    print(index, result)

Now the problem is that this version runs only about half as long as the first one, failing at index 513 with the same maximum-recursion-depth error (and an even longer traceback):

512 44893845313309942978077298160660626646181883623886239791269694466661322268805744081870933775586567858979269
513 72639767602615659833415769076743441675530755653534070653184546540063470576806357692953027861477736726533858
Traceback (most recent call last):
  File "/home/user/Área de Trabalho/fibs2.py", line 6, in <genexpr>
    yield from (sum(pair) for pair in pairwise(fibs()))
  File "/home/user/Área de Trabalho/fibs2.py", line 6, in fibs
    (... this repeats a lot, exceeds maximum character limit in SO)
    yield from (sum(pair) for pair in pairwise(fibs()))
  File "/home/user/Área de Trabalho/fibs2.py", line 6, in fibs
    yield from (sum(pair) for pair in pairwise(fibs()))
RecursionError: maximum recursion depth exceeded while calling a Python object

So I ask: why the difference?

I noticed this issue while playing with a Haskell-like implementation of Fibonacci that Joel Grus published on his blog, changed to use pairwise, which was added to the itertools module in Python 3.10. This way only a single import is needed (pairwise), instead of three imports (add, islice, tee) and one helper function definition (tail).
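
For reference, on Pythons older than 3.10 the same pairing can be built with the tee-based recipe from the itertools documentation:

```python
from itertools import tee

def pairwise(iterable):
    "pairwise('ABCD') -> AB BC CD  (the classic itertools recipe)"
    a, b = tee(iterable)
    next(b, None)  # advance the second copy by one element
    return zip(a, b)

print(list(pairwise([0, 1, 1, 2, 3])))  # [(0, 1), (1, 1), (1, 2), (2, 3)]
```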

CodePudding user response:

The "stamina" you're referring to relates both to lazy evaluation and to recursion against a configured maximum stack depth.

The biggest difference between Common Lisp and Scheme is that the latter absolutely requires proper tail-call support. Lisp code might happen to loop forever, but Scheme guarantees it for code that fits that format, with no growth of the stack.

In a Python itertools context, you are essentially complaining that this was never part of the spec, and also that expressing the idea one way or another may push a single stack frame, or a pair of them, with each level of nesting. OK, fair enough. The tools you're testing weren't written to conform to that requirement, and when you push on that aspect, they come up wanting. When engineering a real solution you might consider patching things up with a small lru_cache, but here I understand it's an academic exercise: you set out to explore the limits, and you have found them.
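
That "single stack frame or a pair of them" can be made concrete (this check is my own illustration): a generator expression is itself a generator, carrying a live Python frame that counts against the recursion limit, while a map object is a C-level iterator with no Python frame at all. So each level of the generator-expression version suspends two frames (the fibs frame plus the genexpr frame) instead of one, roughly halving the reachable depth.

```python
genexpr = (x * x for x in range(3))
mapping = map(lambda x: x * x, range(3))

# A generator expression is a real generator with its own Python frame:
print(hasattr(genexpr, "gi_frame"))   # True
# A map object is a C-level iterator, with no Python frame to count:
print(hasattr(mapping, "gi_frame"))   # False
```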

The code is doing what you asked of it. Yes, there is room to design better code. Go for it!
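
One way to "design better code", sketched here purely as an illustration: keep the infinite generator but drop the recursion entirely, so the stack depth stays constant no matter how far you iterate.

```python
from itertools import islice

def fibs():
    # Same infinite Fibonacci sequence, but iterative: O(1) stack depth,
    # so there is no recursion limit to run into.
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

print(list(islice(fibs(), 10)))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```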
