Option 1
As long as it's pairs we're talking about, let's try a list comprehension:
flatPairs = [
    [x, y] if i % 2 == 0 else [y, x]
    for i, (x, y) in enumerate(zip(pleatedTuple[::2], pleatedTuple[1::2]))
]
You can also build this from scratch using a loop:
flatPairs = []
for i, (x, y) in enumerate(zip(pleatedTuple[::2], pleatedTuple[1::2])):
    if i % 2 == 0:
        flatPairs.append([x, y])
    else:
        flatPairs.append([y, x])
print(flatPairs)
[[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
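For reference, pleatedTuple comes from the question; the output above corresponds to an input along these lines (an assumed example, not part of the original):

# Assumed example input: every second pair is stored in reversed ("pleated") order.
pleatedTuple = (0, 1, 3, 2, 4, 5, 7, 6, 8, 9)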
Option 2
Use Ned Batchelder's chunking subroutine chunks and flip every alternate sublist:
# https://stackoverflow.com/a/312464/4909087
def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]
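To see what chunks yields, here's a quick toy call (my own illustration, not from the original post):

# chunks is lazy: it yields one slice of l at a time.
print(list(chunks([0, 1, 3, 2, 4, 5], 2)))
# [[0, 1], [3, 2], [4, 5]]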
Call chunks and exhaust the returned generator to get a list of pairs:
flatPairs = list(chunks(pleatedTuple, n=2))
Now, reverse every other pair with a loop:
for i in range(1, len(flatPairs), 2):
    flatPairs[i] = flatPairs[i][::-1]
print(flatPairs)
[(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]
Note that in this case, the result is a list of tuples.
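If you'd rather have lists to match Option 1's output, a simple final conversion pass (my own addition, not part of the original answer) works:

# Turn each tuple pair into a list so the result matches Option 1.
flatPairs = [list(pair) for pair in flatPairs]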
Performance
(of my answers only)
I'm interested in performance, so I've decided to time my answers:
# Setup
pleatedTuple = tuple(range(100000))
# List comp
21.1 ms ± 1.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# Loop
20.8 ms ± 1.71 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# chunks
26 ms ± 2.19 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
For more speed, you can replace the chunks generator with a faster slice-and-zip alternative:
flatPairs = list(zip(pleatedTuple[::2], pleatedTuple[1::2]))
And then reverse with a loop as required. This brings the time down considerably:
13.1 ms ± 994 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
A 2x speedup, phew! Beware though, this isn't nearly as memory efficient as the generator would be...
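Putting the faster slice-and-zip pairing together with the reversal loop, the whole thing reads like this (same logic as above, just gathered in one place):

# Pair up consecutive elements via slicing + zip. Fast, but the slices
# materialise two temporary tuples, unlike the lazy chunks generator.
flatPairs = list(zip(pleatedTuple[::2], pleatedTuple[1::2]))

# Reverse every other pair, as before.
for i in range(1, len(flatPairs), 2):
    flatPairs[i] = flatPairs[i][::-1]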