I tried multiple ranges with this example:
import time

def reader():
    for a in range(100000000):
        yield a

def reader_wrapper(gen):
    for i in gen:
        yield i

def reader_wrapper_enhanced(gen):
    yield from gen

wrap = reader_wrapper_enhanced(reader())
start = time.perf_counter()
for i in wrap:
    ...
print("LAST: %s " % (time.perf_counter() - start))

wrap = reader_wrapper(reader())
start = time.perf_counter()
for i in wrap:
    ...
print("LAST: %s " % (time.perf_counter() - start))
My main question is whether yield from is actually faster than a plain yield in a loop.
Results for each range:
Note: First result is the yield from and the second the yield.
RANGE | Yield from | yield loop |
---|---|---|
100 | 1.4644000000001156e-05 | 1.3087000000000168e-05 |
100000 | 0.010678924 | 0.012484127000000005 |
100000000 | 7.763497913 | 8.586706118000002 |
10000000000 | 794.1722820499999 | 807.1722820400000 |
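As a side note, the one-shot perf_counter timings above are noisy for small ranges; a sketch using the standard timeit module (function and variable names here are illustrative, not from the original post) averages repeated runs and gives more stable numbers:

```python
import timeit
from collections import deque

def reader(n):
    for a in range(n):
        yield a

def wrap_loop(gen):
    for i in gen:
        yield i

def wrap_from(gen):
    yield from gen

def consume(g):
    # deque with maxlen=0 drains an iterator quickly, discarding items
    deque(g, maxlen=0)

N = 100_000
t_from = timeit.timeit(lambda: consume(wrap_from(reader(N))), number=20)
t_loop = timeit.timeit(lambda: consume(wrap_loop(reader(N))), number=20)
print(f"yield from: {t_from:.4f}s  loop yield: {t_loop:.4f}s")
```

Absolute numbers will vary by machine and Python version, so compare the two timings relative to each other rather than against the table above.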
If it is faster, shouldn't we always just use it in cases like this?
CodePudding user response:
Yes, it's faster when the inputs are long enough (though not by much, as you've seen), and yes, you may as well let Python do the work of yielding from the delegate iterator by default.
The one time you wouldn't want to do this is when you are delegating to a generator that you don't want to receive values the caller sends with .send() or .throw(): with a plain yield, your wrapper generator receives them; with yield from, the delegate generator receives them (usually the latter is what you want, and it's the primary reason yield from exists in the first place).
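A minimal sketch of that difference (the generator names are mine, chosen for illustration): with yield from, a value passed to .send() reaches the delegate; with a plain loop, the wrapper swallows it and the delegate resumes with None.

```python
def delegate():
    # Pauses at the first yield; whatever the caller sends
    # becomes the value of the yield expression.
    received = yield "ready"
    yield f"delegate got: {received}"

def wrapper_plain(gen):
    # The sent value lands in THIS generator's `yield item`
    # expression and is silently dropped; the delegate is only
    # ever resumed via next(), so it sees None.
    for item in gen:
        _sent = yield item

def wrapper_from(gen):
    # yield from transparently forwards .send()/.throw()
    # to the delegate.
    yield from gen

w = wrapper_from(delegate())
print(next(w))          # "ready"
print(w.send("hello"))  # "delegate got: hello"

p = wrapper_plain(delegate())
print(next(p))          # "ready"
print(p.send("hello"))  # "delegate got: None" -- the delegate never saw it
```

So the wrapper is only "transparent" to the caller when it uses yield from; that transparency, not speed, is the feature that motivated PEP 380.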