Is list=False parameter planned for future zip() versions (new in Python 3.10 strict=False)?

Time:09-30

Since Python 3.10 the builtin function zip() accepts the keyword parameter strict with a default value of False, providing the option of setting strict=True, which raises a ValueError in case the iterables don't have equal lengths (helps with debugging).

Is there also a list=False parameter in the pipeline which, if set to list=True, would make zip() produce lists instead of tuples? If not: is there a good reason why not?


From the comments:


Sounds like you want map(list, zip(...)) - khelwood

This will go twice over the full length of the zipped items, and a zip() written in Python would slow things down. Another option is [list(e) for e in zip(...)], with the same disadvantage. What I want is a list=True option for zip, and I wonder why it isn't there. Probably there is a good reason that I am not aware of. Packing into tuples first and repacking them into lists afterwards makes little sense. Tuples are more restricted than lists, cutting down the number of possible direct operations on the result. In my eyes it would be more Pythonic to return a list of lists, which gives more flexibility - the reason why I wonder and ask the question.
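The two workarounds mentioned in the comments, side by side (a sketch):

```python
a, b = [1, 2, 3], [4, 5, 6]

# Option 1: map a list() conversion over zip's tuples (lazy, stays at the C level).
via_map = list(map(list, zip(a, b)))

# Option 2: list comprehension converting each tuple.
via_comp = [list(e) for e in zip(a, b)]

print(via_map)   # [[1, 4], [2, 5], [3, 6]]
print(via_comp)  # same result
```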

Generally you use zip to iterate over pairs like for x, y in zip(...)

Yes, but another use of zip() is transposing a 2D list-of-lists array (provided e.g. in numpy by .T), and there you want a list of lists rather than a list of tuples as the result, in order to be able to operate on the array elements.
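The transpose use case looks like this; zip(*matrix) yields tuples, so getting mutable rows back needs the extra conversion step the question is about (sketch):

```python
matrix = [[1, 2, 3],
          [4, 5, 6]]

# zip(*matrix) transposes, but each row comes back as a tuple...
as_tuples = list(zip(*matrix))      # [(1, 4), (2, 5), (3, 6)]

# ...so mutable rows require wrapping each tuple in list().
as_lists = [list(row) for row in zip(*matrix)]
as_lists[0][0] = 99                 # lists allow in-place assignment
print(as_lists)  # [[99, 4], [2, 5], [3, 6]]
```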

CodePudding user response:

There is no such plan. The benefit is trivial, and it would slow down all uses of zip a little bit to enable a use case that isn't needed 99% of the time.

In the (likely far fewer than) 1% of cases where it is needed, it can be achieved trivially with map(list, zip(...)), which has surprisingly low overhead, because:
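That map(list, zip(...)) stays lazy can be observed directly: each tuple is converted to a list only when requested (a sketch):

```python
lazy = map(list, zip([1, 2, 3], [4, 5, 6]))

# Nothing has been materialized yet; items appear one at a time on demand.
first = next(lazy)
print(first)        # [1, 4]
print(next(lazy))   # [2, 5]
# The remaining pairs are still pending inside the map/zip pipeline.
```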

  1. map is still lazy, so you're not producing any large intermediate data structures, and it's also implemented in C, so you don't pop in and out of the bytecode interpreter layer as you go (each new item involves a single call down into the C layer which does all the work to produce the next item "atomically"),
  2. Conversion from known sized tuple to list is incredibly cheap (creating or extending a list from an existing list or tuple is special-cased to be as performant as possible since it's such a common case), and
  3. On the CPython reference interpreter, zip itself has an optimization that recognizes when the tuple it produces is not referenced elsewhere when the next one is requested and reuses that tuple for the next output (so in fact, you only produce one truly new tuple, and otherwise only generate new lists; the optimization for tuples would likely be useless for lists since if you wanted a list, you probably intended to modify it and therefore it couldn't be reused reliably even if you dropped all references to it before the next loop). The overhead of having it produce a tuple first becomes trivial enough due to this optimization (which also benefits cases like for x, y in zip(it1, it2):) that the incremental benefit of directly producing a list is not enough to justify the change.
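The reuse described in item 3 is a CPython implementation detail, but it can be glimpsed by dropping the only external reference to a tuple before asking for the next one (a sketch; the id comparison is not guaranteed by the language, only by the current CPython optimization):

```python
it = zip([1, 2, 3], [4, 5, 6])

t1 = next(it)
first_id = id(t1)
del t1              # drop the only outside reference to the tuple

t2 = next(it)
# On CPython, id(t2) will often equal first_id: zip reused the tuple
# because nothing else referenced it when the next item was requested.
print(id(t2) == first_id, t2)
```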

The optimization from item #3 in particular is a strong reason not to do this: to preserve the optimization for tuples, the code for zip's tp_next (the C equivalent of the __next__ that iterators implement to produce the next value) would get more complicated, and expanding a cheap function called many times even a little can have a big impact on performance (if nothing else, it introduces at least one additional test-and-branch based on the zip "mode", and adds the specialized code needed to work efficiently with a list, on top of the existing code for tuples, at the C layer). This might not seem like much; it might only add a nanosecond of overhead per item in tuple mode, but given how often the method is called, and how this would slow the common (tuple) case to support the uncommon (list) case, it's hard to justify.

For the record, the true cost of the map(list, ...) wrapper seems to be roughly a 3x multiplication of loop overhead for the simple case of zipping two sequences (timings from CPython 3.10.5 on Linux x86-64, using IPython 8.4.0's %%timeit magic to simplify microbenchmarks):

>>> %%timeit a = tuple(range(1000)); b = tuple(range(1000, 2000))
... for tup in zip(a, b):
...     pass
15.7 μs ± 216 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)

>>> %%timeit a = tuple(range(1000)); b = tuple(range(1000, 2000))
... for lst in map(list, zip(a, b)):
...     pass
47 μs ± 1.09 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)

All things considered, that's pretty good: ~30 ns of overhead per item to get lists instead of tuples. For comparison, the cost of doing just one meaningful thing with said list, say, calling lst.append(1) as the body of the loop, adds ~45 ns, so basically anything you'd do with the list would cost more than the incremental expense of getting lists in the first place. It's just not a big deal for something that comes up so rarely. If supporting list=True added just 1 ns of overhead to the tuple case, you'd need to prove that the list=True case occurs at least once in every 30 uses (more like once in ten or so in practice, since zip would run even slower in list mode: lists require two allocations, not one, and aren't as heavily optimized for repeated allocation of small fixed-size objects as tuples are), and I guarantee you the ratio is nowhere close to that in real-world code.
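The same comparison can be reproduced without IPython using the stdlib timeit module (a sketch; absolute numbers will vary by machine, but the ratio should be similar):

```python
import timeit

setup = "a = tuple(range(1000)); b = tuple(range(1000, 2000))"

plain = timeit.timeit("for tup in zip(a, b): pass",
                      setup=setup, number=10_000)
mapped = timeit.timeit("for lst in map(list, zip(a, b)): pass",
                       setup=setup, number=10_000)

# Expect the map(list, ...) loop to take roughly 2-4x longer.
print(f"zip only:       {plain:.3f}s")
print(f"map(list, zip): {mapped:.3f}s")
```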
