682

I have a Python script which takes as input a list of integers, which I need to work with four integers at a time. Unfortunately, I don't have control of the input, or I'd have it passed in as a list of four-element tuples. Currently, I'm iterating over it this way:

for i in range(0, len(ints), 4):
    # dummy op for example code
    foo += ints[i] * ints[i + 1] + ints[i + 2] * ints[i + 3]

It looks a lot like "C-think", though, which makes me suspect there's a more pythonic way of dealing with this situation. The list is discarded after iterating, so it needn't be preserved. Perhaps something like this would be better?

while ints:
    foo += ints[0] * ints[1] + ints[2] * ints[3]
    ints[0:4] = []

Still doesn't quite "feel" right, though. :-/

Related question: How do you split a list into evenly sized chunks in Python?

Ben Blank
  • Your code does not work if the list size is not a multiple of four. – Pedro Henriques Jan 12 '09 at 03:03
  • I'm extend()ing the list so that its length is a multiple of four before it gets this far. – Ben Blank Jan 12 '09 at 03:44
  • @ΤΖΩΤΖΙΟΥ — The questions are very similar, but not quite duplicate. It's "split into any number of chunks of size N" vs. "split into N chunks of any size". :-) – Ben Blank Jul 21 '11 at 18:16
  • possible duplicate of [How do you split a list into evenly sized chunks in Python?](http://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks-in-python) – dbr Jun 23 '12 at 15:23
  • Does this answer your question? [How do you split a list into evenly sized chunks?](https://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks) – mkrieger1 Apr 10 '22 at 09:58

40 Answers

601
def chunker(seq, size):
    return (seq[pos:pos + size] for pos in range(0, len(seq), size))

Works with any sequence:

text = "I am a very, very helpful text"

for group in chunker(text, 7):
    print(repr(group), end=' ')
# 'I am a ' 'very, v' 'ery hel' 'pful te' 'xt'

print('|'.join(chunker(text, 10)))
# I am a ver|y, very he|lpful text

animals = ['cat', 'dog', 'rabbit', 'duck', 'bird', 'cow', 'gnu', 'fish']

for group in chunker(animals, 3):
    print(group)
# ['cat', 'dog', 'rabbit']
# ['duck', 'bird', 'cow']
# ['gnu', 'fish']
nosklo
  • @Carlos Crasborn's version works for any iterable (not just sequences as the above code); it is concise and probably just as fast or even faster. Though it might be a bit obscure (unclear) for people unfamiliar with the `itertools` module. – jfs Jan 12 '09 at 14:39
  • @J.F. Sebastian — Now that I've gotten the chance to figure out *why* his code works, I feel compelled to change my accepted answer (which I *hate* doing). I love this answer, too, @nosklo, but that izip_longest trick seems tailor-made for my situation. – Ben Blank Jan 12 '09 at 22:03
  • I was having trouble using this, but it started working when I replaced the outside parens with square brackets. Is the syntax in the answer Python 3 only? – RoboCop87 Jul 03 '14 at 15:41
  • Note that `chunker` returns a `generator`. Replace the return to: `return [...]` to get a list. – Dror Feb 24 '15 at 08:59
  • Instead of writing a function building and then returning a generator, you could also write a generator directly, using `yield`: `for pos in xrange(0, len(seq), size): yield seq[pos:pos + size]`. I'm not sure if internally this would be handled any differently in any relevant aspect, but it might be even a tiny bit clearer. – Alfe Apr 15 '16 at 10:22
  • Note this works only for sequences that support item access by index and won't work for generic iterators, because they may not support the `__getitem__` method. – apollov Dec 22 '17 at 18:17
  • I rewrote this as a generator. Could you please do that? It has all of the upside, none of the downside, and is more memory-efficient. – smci May 25 '19 at 06:47
  • @smci the `chunker()` function above **is a generator** - it returns a generator expression – nosklo May 25 '19 at 11:11
  • @nosklo: Ah ok, I rewrote it as a simple generator: `for i in range(0, len(seq), size): yield seq[i:i + size]` which seemed to me to be simpler. – smci May 25 '19 at 11:12
  • Note that `size` is the length of each split, not the size/number of splits/groups. – flow2k Jul 26 '21 at 05:39
  • I really feel like this should be a convenience method for all ordered containers. I'm surprised it's not. – otocan Feb 16 '23 at 14:58
437

Modified from the Recipes section of Python's itertools docs:

from itertools import zip_longest

def grouper(iterable, n, fillvalue=None):
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

Example

grouper('ABCDEFGHIJ', 3, 'x')  # --> 'ABC' 'DEF' 'GHI' 'Jxx'

Note: on Python 2 use izip_longest instead of zip_longest.
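
Several commenters below asked how this works: args contains n references to the same iterator, so each tuple that zip_longest builds advances that single iterator n times. A minimal sketch of the mechanism (my illustration, not part of the original answer):

from itertools import zip_longest

it = iter('ABCDEFG')
args = [it, it, it]  # three references to ONE iterator, not three iterators
print(list(zip_longest(*args, fillvalue='x')))
# [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]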

Craz
  • Finally got a chance to play around with this in a python session. For those who are as confused as I was, this is feeding the same iterator to izip_longest multiple times, causing it to consume successive values of the same sequence rather than striped values from separate sequences. I love it! – Ben Blank Jan 12 '09 at 22:00
  • What's the best way to filter back out the fillvalue? ([item for item in items if item is not fillvalue] for items in grouper(iterable))? – gotgenes Aug 26 '09 at 22:48
  • I am not sure if this is the most pythonic answer but it possibly is the best use of `[LIST]*n` structure. – Utku Zihnioglu Feb 15 '11 at 00:01
  • This works, but it seems interpreter implementation-dependent. Does the itertools.izip_longest specification actually guarantee a striped access order for the iterators (e.g., with 3 iterators A, B, and C, the access ordering will be A,B,C,A,B,C,A,Fill,C and not something like A,A,B,B,C,C,A,Fill,C or A,B,C,C,B,A,A,Fill,C)? I could see the latter orderings being useful for cache-line performance optimization. If the single-striping access ordering is not guaranteed, this isn't a theoretically safe solution (although speaking practically, most implementations will single-step the iterators). – David B. Jul 23 '11 at 05:59
  • You can combine this all into a short one-liner: `zip(*[iter(yourList)]*n)` (or `izip_longest` with fillvalue) – ninjagecko Apr 28 '12 at 14:55
  • I suspect that the performance of this grouper recipe for 256k sized chunks will be very poor, because `izip_longest` will be fed 256k arguments. – anatoly techtonik Apr 28 '13 at 15:07
  • What is an `izip_longest` object, why doesn't it behave like a list, and why is it returned from this? Why must I call `list()` on it, why doesn't it just return a new list? – davidgoli Nov 26 '13 at 02:31
  • @DavidB.: the recipe is given as the example code in the official documentation. Unless it is a bug, the behaviour is guaranteed – jfs Apr 23 '14 at 21:26
  • To be efficient with large `n` you will need to manage a pool and feed it with `islice` like here https://github.com/Suor/funcy/blob/1.0.0/funcy/seqs.py#L293 – Suor Jun 04 '14 at 20:08
  • In several places commenters say "when I finally worked out how this worked...." Maybe a bit of explanation is required. Particularly the list of iterators aspect. – LondonRob Aug 14 '15 at 07:00
  • I prefer putting the arguments in order of the size first followed by the sequence. This makes it easy to create partials for chunking by a certain amount, and then just passing different sequences to them. – PaulMcG Oct 17 '15 at 14:53
  • @anatolytechtonik performance is judged by measurements on real data on real hardware, not hypothetical "logic". Over the years, I've used the grouper() recipe many times. I've encountered cases when you cannot use slicing because the input is not a sequence. I don't remember a single case when I would replace the grouper()-like code with the chunker()-like code due to performance concerns (I might replace it for readability in the code for beginners). YMMV – jfs Dec 02 '16 at 23:37
  • @gotgenes, `(filter(None, chunk) for chunk in zip_longest(*[iter(yourList)]*n)` will provide a chunk generator. Each chunk is itself a generator (using filter) which will skip the fill values. – flutefreak7 Apr 01 '18 at 07:40
  • if next() for exhausted iterator happens to be slow (eg. psycopg2 cursor), the last chunk of `zip_longest` is on average `n/2` times slower than it should be. – Valentas Oct 08 '18 at 09:48
  • Is there a way to use this but without the `None` filling up the last chunk? – CMCDragonkai Dec 11 '18 at 05:31
  • This is less mem intensive/faster than it looks, presumably because the number of arguments and iterable are all pointers to the same object(s). – Gringo Suave Feb 12 '20 at 20:45
  • @CMCDragonkai, you can filter the iterator with `(x for x in chunk if x)`, assuming your fillvalue is `None`. – xbello Apr 30 '20 at 12:33
  • I would not recommend this since it is based on the implementation details of `zip_longest`. Even if its behavior is guaranteed, this recipe still requires knowledge beyond the API to be read (and understood). However interesting as a Python exercise. – emazep Feb 07 '21 at 04:45
  • Rephrasing @BenBlank's comment for more clarity: "This is feeding the list of same iterator of length=n to `zip_longest`. So internally next is called on each iterator element of the passed list. When iterator is exhausted(i.e. string size is not a multiple of n) filler value is pushed for the last element of the result tuple" – Amit Tripathi May 22 '21 at 20:09
  • @gotgenes `yield from ([n for n in t if n is not fillvalue] for t in zip_longest(*args, fillvalue = fillvalue))`, setting sentinel `fillvalue=object()`: `[[0, 1], [2, 3], [4]]`. Using an object sentinel allows `None` to be a valid element of the list outside fillvalue. – Jean Monet Feb 19 '22 at 21:18
  • @anatolytechtonik it's not as bad as you might expect. Python is designed to handle large numbers of arguments to functions. On my system, `python -m timeit "(lambda *args: None)(*range(2**18))"` runs smoothly and reports about `11.6 msec per loop`, compared to `7.28 msec per loop` for just building a list with `list(range(2**18))`. – Karl Knechtel Aug 12 '22 at 02:05
218
chunk_size = 4
for i in range(0, len(ints), chunk_size):
    chunk = ints[i:i+chunk_size]
    # process chunk of size <= chunk_size
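
To answer the question in the comments below: when len(ints) is not a multiple of chunk_size, the final slice simply comes out short, since slicing past the end of a list is safe. A quick illustration (my example values, not the original answer's):

ints = list(range(10))
chunk_size = 4
for i in range(0, len(ints), chunk_size):
    print(ints[i:i + chunk_size])
# [0, 1, 2, 3]
# [4, 5, 6, 7]
# [8, 9]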
S.Lott
  • How does it behave if len(ints) is not a multiple of the chunkSize? – PlsWork Feb 17 '19 at 23:15
  • @AnnaVopureta `chunk` will have 1, 2 or 3 elements for the last batch of elements. See this question about why [slice indices can be out of bounds](https://stackoverflow.com/questions/9490058/why-substring-slicing-index-out-of-range-works-in-python). – Boris Verkhovskiy Mar 25 '19 at 18:56
  • Upvoted the solution that doesn't rely on itertools. It's nice to have a solution that works with Python out of the box. – Lou Dec 03 '22 at 15:49
33

Since Python 3.8 you can use the walrus operator (:=) and itertools.islice.

from itertools import islice

list_ = [i for i in range(10, 100)]

def chunker(it, size):
    iterator = iter(it)
    while chunk := list(islice(iterator, size)):
        print(chunk)
In [2]: chunker(list_, 10)                                                         
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
[20, 21, 22, 23, 24, 25, 26, 27, 28, 29]
[30, 31, 32, 33, 34, 35, 36, 37, 38, 39]
[40, 41, 42, 43, 44, 45, 46, 47, 48, 49]
[50, 51, 52, 53, 54, 55, 56, 57, 58, 59]
[60, 61, 62, 63, 64, 65, 66, 67, 68, 69]
[70, 71, 72, 73, 74, 75, 76, 77, 78, 79]
[80, 81, 82, 83, 84, 85, 86, 87, 88, 89]
[90, 91, 92, 93, 94, 95, 96, 97, 98, 99]
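
If you want the chunks handed back rather than printed, a small variation (my sketch, not the original answer's code) yields them instead:

from itertools import islice

def chunker(it, size):
    iterator = iter(it)
    # the walrus operator assigns and tests the chunk in one step
    while chunk := list(islice(iterator, size)):
        yield chunk

print(list(chunker(range(8), 3)))
# [[0, 1, 2], [3, 4, 5], [6, 7]]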

kafran
31
import itertools
def chunks(iterable,size):
    it = iter(iterable)
    chunk = tuple(itertools.islice(it,size))
    while chunk:
        yield chunk
        chunk = tuple(itertools.islice(it,size))

# though this will throw ValueError if the length of ints
# isn't a multiple of four:
for x1,x2,x3,x4 in chunks(ints,4):
    foo += x1 + x2 + x3 + x4

for chunk in chunks(ints,4):
    foo += sum(chunk)

Another way:

import itertools
def chunks2(iterable,size,filler=None):
    it = itertools.chain(iterable,itertools.repeat(filler,size-1))
    chunk = tuple(itertools.islice(it,size))
    while len(chunk) == size:
        yield chunk
        chunk = tuple(itertools.islice(it,size))

# x2, x3 and x4 could get the value 0 if the length is not
# a multiple of 4.
for x1,x2,x3,x4 in chunks2(ints,4,0):
    foo += x1 + x2 + x3 + x4
Markus Jarderot
  • +1 for using generators, seems like the most "pythonic" out of all suggested solutions – Sergey Golovchenko Jan 12 '09 at 03:23
  • It's rather long and clumsy for something so easy, which isn't very pythonic at all. I prefer S. Lott's version – zenazn Jan 12 '09 at 03:51
  • @zenazn: this will work on generator instances, slicing won't – Janus Troelsen Nov 25 '12 at 17:33
  • In addition to working properly with generators and other non-sliceable iterators, the first solution also doesn't require a "filler" value if the final chunk is smaller than `size`, which is sometimes desirable. – dano Aug 19 '14 at 20:27
  • Also +1 for generators. Other solutions require a `len` call and so don't work on other generators. – Cuadue Apr 10 '15 at 17:58
  • I would throw a try: block around and catch the value error exception to handle the <4 multiple issue. – Tom Myddeltyn May 05 '16 at 22:16
  • The first one is a good, simple version that doesn't use a `fillvalue` but still works on any iterable. Nice! – Yuval Jun 02 '17 at 07:18
  • Python 3.8 introduces assignment expressions, so we can now get even more terse: `while chunk := tuple(itertools.islice(it, size)): yield chunk`. Maybe this would also appease @zenazn? :D – Milosz Nov 30 '21 at 10:35
27

If you don't mind using an external package, you could use iteration_utilities.grouper from iteration_utilities 1. It supports all iterables (not just sequences):

from iteration_utilities import grouper
seq = list(range(20))
for group in grouper(seq, 4):
    print(group)

which prints:

(0, 1, 2, 3)
(4, 5, 6, 7)
(8, 9, 10, 11)
(12, 13, 14, 15)
(16, 17, 18, 19)

In case the length isn't a multiple of the group size, it also supports filling the incomplete last group or truncating it (discarding the incomplete last group):

from iteration_utilities import grouper
seq = list(range(17))
for group in grouper(seq, 4):
    print(group)
# (0, 1, 2, 3)
# (4, 5, 6, 7)
# (8, 9, 10, 11)
# (12, 13, 14, 15)
# (16,)

for group in grouper(seq, 4, fillvalue=None):
    print(group)
# (0, 1, 2, 3)
# (4, 5, 6, 7)
# (8, 9, 10, 11)
# (12, 13, 14, 15)
# (16, None, None, None)

for group in grouper(seq, 4, truncate=True):
    print(group)
# (0, 1, 2, 3)
# (4, 5, 6, 7)
# (8, 9, 10, 11)
# (12, 13, 14, 15)

Benchmarks

I also decided to compare the run-times of a few of the mentioned approaches. It's a log-log plot, grouping into groups of 10 elements for lists of varying size. Qualitatively: lower means faster:

[Benchmark plot: run-time (log-log) versus list size for each approach]

At least in this benchmark, iteration_utilities.grouper performs best, followed by the approach of Craz.

The benchmark was created with simple_benchmark 1. The code used to run it was:

import iteration_utilities
import itertools
from itertools import zip_longest

def consume_all(it):
    return iteration_utilities.consume(it, None)

import simple_benchmark
b = simple_benchmark.BenchmarkBuilder()

@b.add_function()
def grouper(l, n):
    return consume_all(iteration_utilities.grouper(l, n))

def Craz_inner(iterable, n, fillvalue=None):
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

@b.add_function()
def Craz(iterable, n, fillvalue=None):
    return consume_all(Craz_inner(iterable, n, fillvalue))

def nosklo_inner(seq, size):
    return (seq[pos:pos + size] for pos in range(0, len(seq), size))

@b.add_function()
def nosklo(seq, size):
    return consume_all(nosklo_inner(seq, size))

def SLott_inner(ints, chunk_size):
    for i in range(0, len(ints), chunk_size):
        yield ints[i:i+chunk_size]

@b.add_function()
def SLott(ints, chunk_size):
    return consume_all(SLott_inner(ints, chunk_size))

def MarkusJarderot1_inner(iterable,size):
    it = iter(iterable)
    chunk = tuple(itertools.islice(it,size))
    while chunk:
        yield chunk
        chunk = tuple(itertools.islice(it,size))

@b.add_function()
def MarkusJarderot1(iterable,size):
    return consume_all(MarkusJarderot1_inner(iterable,size))

def MarkusJarderot2_inner(iterable,size,filler=None):
    it = itertools.chain(iterable,itertools.repeat(filler,size-1))
    chunk = tuple(itertools.islice(it,size))
    while len(chunk) == size:
        yield chunk
        chunk = tuple(itertools.islice(it,size))

@b.add_function()
def MarkusJarderot2(iterable,size):
    return consume_all(MarkusJarderot2_inner(iterable,size))

@b.add_arguments()
def argument_provider():
    for exp in range(2, 20):
        size = 2**exp
        yield size, simple_benchmark.MultiArgument([[0] * size, 10])

r = b.run()

1 Disclaimer: I'm the author of the libraries iteration_utilities and simple_benchmark.

MSeifert
19

The more-itertools package has a chunked function which does exactly that:

import more_itertools
for s in more_itertools.chunked(range(9), 4):
    print(s)

Prints

[0, 1, 2, 3]
[4, 5, 6, 7]
[8]

chunked returns the items in a list. If you'd prefer iterables, use ichunked.
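
For example, ichunked yields lazy sub-iterables instead of lists (a quick sketch; assumes a reasonably recent more-itertools):

import more_itertools

for s in more_itertools.ichunked(range(9), 4):
    print(list(s))  # each chunk is itself a lazy iterable
# [0, 1, 2, 3]
# [4, 5, 6, 7]
# [8]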

teekarna
18

The ideal solution for this problem works with iterators (not just sequences). It should also be fast.

This is the solution provided by the documentation for itertools:

def grouper(n, iterable, fillvalue=None):
    #"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    return itertools.izip_longest(fillvalue=fillvalue, *args)  # itertools.zip_longest on Python 3

Using IPython's %timeit on my MacBook Air, I get 47.5 µs per loop.

However, this really doesn't work for me, since the results are padded into even-sized groups. A solution without the padding is slightly more complicated. The most naive solution might be:

def grouper(size, iterable):
    i = iter(iterable)
    while True:
        out = []
        try:
            for _ in range(size):
                out.append(next(i))
        except StopIteration:
            if out:  # don't yield an empty trailing group
                yield out
            break

        yield out

Simple, but pretty slow: 693 µs per loop

The best solution I could come up with uses islice for the inner loop:

def grouper(size, iterable):
    it = iter(iterable)
    while True:
        group = tuple(itertools.islice(it, None, size))
        if not group:
            break
        yield group

With the same dataset, I get 305 µs per loop.

Unable to get a pure solution any faster than that, I provide the following solution with an important caveat: if your input data has instances of fillvalue in it, you could get a wrong answer.

def grouper(n, iterable, fillvalue=None):
    #"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    # itertools.zip_longest on Python 3
    for x in itertools.izip_longest(*args, fillvalue=fillvalue):
        if x[-1] is fillvalue:
            yield tuple(v for v in x if v is not fillvalue)
        else:
            yield x

I really don't like this answer, but it is significantly faster: 124 µs per loop

rhettg
  • You can reduce runtime for recipe #3 by ~10-15% by moving it to the C layer (omitting `itertools` imports; `map` must be Py3 `map` or `imap`): `def grouper(n, it): return takewhile(bool, map(tuple, starmap(islice, repeat((iter(it), n)))))`. Your final function can be made less brittle by using a sentinel: get rid of the `fillvalue` argument; add a first line `fillvalue = object()`, then change the `if` check to `if i[-1] is fillvalue:` and the line it controls to `yield tuple(v for v in i if v is not fillvalue)`. Guarantees no value in `iterable` can be mistaken for the filler value. – ShadowRanger Sep 30 '16 at 01:14
  • BTW, big thumbs up on #4. I was about to post my optimization of #3 as a better answer (performance-wise) than what had been posted so far, but with the tweak to make it reliable, resilient #4 runs over twice as fast as optimized #3; I did not expect a solution with Python level loops (and no theoretical algorithmic differences AFAICT) to win. I assume #3 loses due to the expense of constructing/iterating `islice` objects (#3 wins if `n` is relatively large, e.g. number of groups is small, but that's optimizing for an uncommon case), but I didn't expect it to be quite that extreme. – ShadowRanger Sep 30 '16 at 01:26
  • For #4, the first branch of the conditional is only ever taken on the last iteration (the final tuple). Instead of reconstituting the final tuple all over again, cache the modulo of the length of the original iterable at the top and use that to slice off the unwanted padding from `izip_longest` on the final tuple: `yield i[:modulo]`. Also, for the `args` variable, tuple it instead of a list: `args = (iter(iterable),) * n`. Shaves a few more clock cycles off. Last, if we ignore fillvalue and assume `None`, the conditional can become `if None in i` for even more clock cycles. – Kumba Aug 14 '17 at 20:55
  • @Kumba: Your first suggestion assumes the input has known length. If it's an iterator/generator, not a collection with known length, there is nothing to cache. There's no real reason to use such an optimization anyway; you're optimizing the uncommon case (the last `yield`), while the common case is unaffected. – ShadowRanger Nov 13 '17 at 19:42
18

I needed a solution that would also work with sets and generators. I couldn't come up with anything very short and pretty, but it's quite readable at least.

def chunker(seq, size):
    res = []
    for el in seq:
        res.append(el)
        if len(res) == size:
            yield res
            res = []
    if res:
        yield res

List:

>>> list(chunker([i for i in range(10)], 3))
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]

Set:

>>> list(chunker(set([i for i in range(10)]), 3))
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]

Generator:

>>> list(chunker((i for i in range(10)), 3))
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
bcoughlan
12
from itertools import izip_longest

def chunker(iterable, chunksize, filler):
    return izip_longest(*[iter(iterable)]*chunksize, fillvalue=filler)
Pedro Henriques
  • A readable way to do it is http://stackoverflow.com/questions/434287/what-is-the-most-pythonic-way-to-iterate-over-a-list-in-chunks#434411 – jfs Jan 12 '09 at 14:29
  • Note that in python 3 `izip_longest` is replaced by `zip_longest` – mdmjsh Oct 10 '19 at 18:52
11

Similar to other proposals, but not exactly identical, I like doing it this way, because it's simple and easy to read:

it = iter([1, 2, 3, 4, 5, 6, 7, 8, 9])
for chunk in zip(it, it, it, it):
    print(chunk)

# (1, 2, 3, 4)
# (5, 6, 7, 8)

This way you won't get the last partial chunk. If you want to get (9, None, None, None) as the last chunk, just use zip_longest from itertools (izip_longest on Python 2).
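
A sketch of that variant on Python 3 (my addition for illustration):

from itertools import zip_longest

it = iter([1, 2, 3, 4, 5, 6, 7, 8, 9])
for chunk in zip_longest(it, it, it, it):
    print(chunk)
# (1, 2, 3, 4)
# (5, 6, 7, 8)
# (9, None, None, None)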

kriss
11

Since nobody's mentioned it yet, here's a zip() solution:

>>> def chunker(iterable, chunksize):
...     return zip(*[iter(iterable)]*chunksize)

It works only if your sequence's length is always divisible by the chunk size or you don't care about a trailing chunk if it isn't.

Example:

>>> s = '1234567890'
>>> chunker(s, 3)
[('1', '2', '3'), ('4', '5', '6'), ('7', '8', '9')]
>>> chunker(s, 4)
[('1', '2', '3', '4'), ('5', '6', '7', '8')]
>>> chunker(s, 5)
[('1', '2', '3', '4', '5'), ('6', '7', '8', '9', '0')]

Or using itertools.izip to return an iterator instead of a list:

>>> from itertools import izip
>>> def chunker(iterable, chunksize):
...     return izip(*[iter(iterable)]*chunksize)

Padding can be fixed using @ΤΖΩΤΖΙΟΥ's answer:

>>> from itertools import chain, izip, repeat
>>> def chunker(iterable, chunksize, fillvalue=None):
...     it   = chain(iterable, repeat(fillvalue, chunksize-1))
...     args = [it] * chunksize
...     return izip(*args)
jfs
9

As of Python 3.12, the itertools module gains a batched function that specifically covers iterating over batches of an input iterable, where the final batch may be incomplete (each batch is a tuple). Per the example code given in the docs:

>>> for batch in batched('ABCDEFG', 3):
...     print(batch)
...
('A', 'B', 'C')
('D', 'E', 'F')
('G',)

Performance notes:

The implementation of batched, like all itertools functions to date, is at the C layer, so it's capable of optimizations Python level code cannot match, e.g.

  • On each pull of a new batch, it proactively allocates a tuple of precisely the correct size (for all but the last batch), instead of building the tuple up element by element with amortized growth causing multiple reallocations (the way a solution calling tuple on an islice does)
  • It only needs to look up the .__next__ function of the underlying iterator once per batch, not n times per batch (the way a zip_longest((iter(iterable),) * n)-based approach does)
  • The check for the end case is a simple C level NULL check (trivial, and required to handle possible exceptions anyway)
  • Handling the end case is a C goto followed by a direct realloc (no making a copy into a smaller tuple) down to the already known final size, since it's tracking how many elements it has successfully pulled (no complex "create sentinel for use as fillvalue and do Python level if/else checks for each batch to see if it's empty, with the final batch requiring a search for where the fillvalue appeared last, to create the cut-down tuple" required by zip_longest-based solutions).

Between all these advantages, it should massively outperform any Python-level solution (even highly optimized ones that push most or all of the per-item work to the C layer), regardless of whether the input iterable is long or short, and regardless of the batch size and the size of the final (possibly incomplete) batch. When itertools.batched is not available, zip_longest-based solutions using a guaranteed-unique fillvalue for safety are the best option in almost all cases, but they can suffer in the pathological case of "few large batches, with the final batch mostly, but not completely, filled", especially before 3.10, when bisect can't be used to reduce slicing off the fillvalues from an O(n) linear search to an O(log n) binary search. batched avoids that search entirely, so it won't experience pathological cases at all.
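
On versions before 3.12, the itertools docs give a roughly equivalent pure-Python recipe (minus the C-level optimizations described above; requires 3.8+ for the walrus operator):

from itertools import islice

def batched(iterable, n):
    # batched('ABCDEFG', 3) --> ABC DEF G
    if n < 1:
        raise ValueError('n must be at least one')
    it = iter(iterable)
    while batch := tuple(islice(it, n)):
        yield batch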

ShadowRanger
  • It's good to see this functionality coming to the standard library! I'll have to make this the accepted answer once 3.12 releases and starts becoming widely available. – Ben Blank Apr 11 '23 at 17:58
6

Another approach would be to use the two-argument form of iter:

from itertools import islice

def group(it, size):
    it = iter(it)
    return iter(lambda: tuple(islice(it, size)), ())

This can be adapted easily to use padding (this is similar to Markus Jarderot’s answer):

from itertools import islice, chain, repeat

def group_pad(it, size, pad=None):
    it = chain(iter(it), repeat(pad))
    return iter(lambda: tuple(islice(it, size)), (pad,) * size)

These can even be combined for optional padding:

_no_pad = object()
def group(it, size, pad=_no_pad):
    if pad is _no_pad:
        it = iter(it)
        sentinel = ()
    else:
        it = chain(iter(it), repeat(pad))
        sentinel = (pad,) * size
    return iter(lambda: tuple(islice(it, size)), sentinel)
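
A quick usage check of the helpers above (my examples):

>>> list(group('ABCDEFG', 3))
[('A', 'B', 'C'), ('D', 'E', 'F'), ('G',)]
>>> list(group_pad('ABCDEFG', 3, pad='x'))
[('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]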
senderle
4

Using little functions and things really doesn't appeal to me; I prefer to just use slices:

data = [...]
chunk_size = 10000 # or whatever
chunks = [data[i:i+chunk_size] for i in xrange(0,len(data),chunk_size)]  # xrange is Python 2; use range on Python 3
for chunk in chunks:
    ...
Will
  • nice but no good for an indefinite stream which has no known `len`. you can do a test with `itertools.repeat` or `itertools.cycle`. – n611x007 Apr 24 '14 at 09:57
  • Also, eats up memory because of using a `[...for...]` [list comprehension](https://docs.python.org/2/reference/expressions.html#list-displays) to physically build a list instead of using a `(...for...)` [generator expression](https://docs.python.org/2/reference/expressions.html#generator-expressions) which would just care about the next element and spare memory – n611x007 Apr 24 '14 at 10:00
4

If the list is large, the highest-performing way to do this will be to use a generator:

def get_chunk(iterable, chunk_size):
    result = []
    for item in iterable:
        result.append(item)
        if len(result) == chunk_size:
            yield tuple(result)
            result = []
    if len(result) > 0:
        yield tuple(result)

for x in get_chunk([1,2,3,4,5,6,7,8,9,10], 3):
    print(x)

(1, 2, 3)
(4, 5, 6)
(7, 8, 9)
(10,)
Robert Rossney
  • (I think that MizardX's itertools suggestion is functionally equivalent to this.) – Robert Rossney Jan 12 '09 at 03:40
  • (Actually, on reflection, no I don't. itertools.islice returns an iterator, but it doesn't use an existing one.) – Robert Rossney Jan 12 '09 at 04:15
  • It is nice and simple, but for some reason even without conversion to tuple 4-7 times slower than the accepted grouper method on `iterable = range(100000000)` & `chunksize` up to 10000. – Valentas Oct 08 '18 at 08:22
  • However, in general I would recommend this method, because the accepted one can be extremely slow when checking for last item is slow https://docs.python.org/3/library/itertools.html#itertools.zip_longest – Valentas Oct 08 '18 at 09:51
4

Using map() instead of zip() fixes the padding issue in J.F. Sebastian's answer:

>>> def chunker(iterable, chunksize):
...   return map(None,*[iter(iterable)]*chunksize)

Example:

>>> s = '1234567890'
>>> chunker(s, 3)
[('1', '2', '3'), ('4', '5', '6'), ('7', '8', '9'), ('0', None, None)]
>>> chunker(s, 4)
[('1', '2', '3', '4'), ('5', '6', '7', '8'), ('9', '0', None, None)]
>>> chunker(s, 5)
[('1', '2', '3', '4', '5'), ('6', '7', '8', '9', '0')]
catwell
  • This is better handled with `itertools.izip_longest` (Py2)/`itertools.zip_longest` (Py3); this use of `map` is doubly-deprecated, and not available in Py3 (you can't pass `None` as the mapper function, and it stops when the shortest iterable is exhausted, not the longest; it doesn't pad). – ShadowRanger Oct 01 '16 at 01:34
3

A one-liner, ad hoc solution to iterate over a list x in chunks of size 4:

for a, b, c, d in zip(x[0::4], x[1::4], x[2::4], x[3::4]):
    ... do something with a, b, c and d ...
Tutul
3

To avoid all conversions to a list, import itertools and:

>>> for k, g in itertools.groupby(xrange(35), lambda x: x/10):
...     print k, list(g)

Produces:

... 
0 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
1 [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
2 [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]
3 [30, 31, 32, 33, 34]
>>> 

I checked groupby and it doesn't convert to a list or use len, so I think this will delay resolution of each value until it is actually used. Sadly, none of the available answers (at this time) seemed to offer this variation.

Obviously, if you need to handle each item in turn, nest a for loop over g:

for k,g in itertools.groupby(xrange(35), lambda x: x/10):
    for i in g:
       # do what you need to do with individual items
    # now do what you need to do with the whole group

My specific interest in this was the need to consume a generator to submit changes in batches of up to 1000 to the gmail API:

    messages = a_generator_which_would_not_be_smart_as_a_list
    for idx, batch in groupby(messages, lambda x: x/1000):
        batch_request = BatchHttpRequest()
        for message in batch:
            batch_request.add(self.service.users().messages().modify(userId='me', id=message['id'], body=msg_labels))
        http = httplib2.Http()
        self.credentials.authorize(http)
        batch_request.execute(http=http)
John Mee
  • What if the list you are chunking is something other than a sequence of ascending integers? – PaulMcG Oct 17 '15 at 14:33
  • @PaulMcGuire see [groupby](https://docs.python.org/3/library/itertools.html#itertools.groupby); given a function to describe order then elements of the iterable can be anything, right? – John Mee Oct 19 '15 at 04:55
  • Yes, I'm familiar with groupby. But if messages were the letters "ABCDEFG", then `groupby(messages, lambda x: x/3)` would give you a TypeError (for trying to divide a string by an int), not 3-letter groupings. Now if you did `groupby(enumerate(messages), lambda x: x[0]/3)` you might have something. But you didn't say that in your post. – PaulMcG Oct 19 '15 at 21:36
3

Unless I missed something, the following simple solution with generator expressions has not been mentioned. It assumes that both the size and the number of chunks are known (which is often the case) and that no padding is required:

def chunks(it, n, m):
    """Make an iterator over m first chunks of size n.
    """
    it = iter(it)
    # Chunks are presented as tuples.
    return (tuple(next(it) for _ in range(n)) for _ in range(m))
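
For example, taking the first four chunks of size three (both counts known up front, as the answer assumes):

>>> list(chunks(range(12), 3, 4))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11)]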
Alexey
2

With NumPy it's simple:

import numpy as np

ints = np.array([1, 2, 3, 4, 5, 6, 7, 8])
for int1, int2 in ints.reshape(-1, 2):
    print(int1, int2)

output:

1 2
3 4
5 6
7 8
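
Note that reshape requires the total length to be an exact multiple of the chunk size. If it may not be, one option is numpy.split with explicit split indices, which leaves a shorter final chunk (a sketch, my addition):

import numpy as np

ints = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
for pair in np.split(ints, list(range(2, len(ints), 2))):
    print(pair)
# [1 2]
# [3 4]
# [5 6]
# [7 8]
# [9]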
endolith
2
def chunker(iterable, n):
    """Yield iterable in chunk sizes.

    >>> chunks = chunker('ABCDEF', n=4)
    >>> next(chunks)
    ['A', 'B', 'C', 'D']
    >>> next(chunks)
    ['E', 'F']
    """
    it = iter(iterable)
    while True:
        chunk = []
        for i in range(n):
            try:
                chunk.append(next(it))
            except StopIteration:
                if chunk:
                    yield chunk
                return  # raising StopIteration here is an error since PEP 479
        yield chunk

if __name__ == '__main__':
    import doctest

    doctest.testmod()
Kamil Sindi
2

In your second method, I would advance to the next group of 4 by doing this:

ints = ints[4:]

However, I haven't done any performance measurement so I don't know which one might be more efficient.

Having said that, I would usually choose the first method. It's not pretty, but that's often a consequence of interfacing with the outside world.

Greg Hewgill
2

I never want my chunks padded, so that requirement is essential. I find that the ability to work on any iterable is also a requirement. Given that, I decided to extend the accepted answer, https://stackoverflow.com/a/434411/1074659.

Performance takes a slight hit in this approach when padding is not wanted, due to the need to compare and filter out the padded values. However, for large chunk sizes, this utility is very performant.

#!/usr/bin/env python3
from itertools import zip_longest


_UNDEFINED = object()


def chunker(iterable, chunksize, fillvalue=_UNDEFINED):
    """
    Collect data into chunks and optionally pad it.

    Performance worsens as `chunksize` approaches 1.

    Inspired by:
        https://docs.python.org/3/library/itertools.html#itertools-recipes

    """
    args = [iter(iterable)] * chunksize
    chunks = zip_longest(*args, fillvalue=fillvalue)
    yield from (
        # note: each filtered chunk is a lazy `filter` object;
        # wrap it in tuple() if a sequence is needed
        filter(lambda val: val is not _UNDEFINED, chunk)
        if chunk[-1] is _UNDEFINED
        else chunk
        for chunk in chunks
    ) if fillvalue is _UNDEFINED else chunks
frankish
1

Yet another answer, the advantages of which are:

1) Easily understandable
2) Works on any iterable, not just sequences (some of the above answers will choke on filehandles)
3) Does not load the chunk into memory all at once
4) Does not make a chunk-long list of references to the same iterator in memory
5) No padding of fill values at the end of the list

That being said, I haven't timed it so it might be slower than some of the more clever methods, and some of the advantages may be irrelevant given the use case.

def chunkiter(iterable, size):
  def inneriter(first, iterator, size):
    yield first
    for _ in xrange(size - 1): 
      yield iterator.next()
  it = iter(iterable)
  while True:
    yield inneriter(it.next(), it, size)

In [2]: i = chunkiter('abcdefgh', 3)
In [3]: for ii in i:                                                
          for c in ii:
            print c,
          print ''
        ...:     
        a b c 
        d e f 
        g h 

Update:
A couple of drawbacks due to the fact that the inner and outer loops are pulling values from the same iterator:
1) continue doesn't work as expected in the outer loop - it just continues on to the next item rather than skipping a chunk. However, this doesn't seem like a problem as there's nothing to test in the outer loop.
2) break doesn't work as expected in the inner loop - control will wind up in the inner loop again with the next item in the iterator. To skip whole chunks, either wrap the inner iterator (ii above) in a tuple, e.g. for c in tuple(ii), or set a flag and exhaust the iterator.

elhefe
1
def group_by(iterable, size):
    """Group an iterable into lists that don't exceed the size given.

    >>> list(group_by([1,2,3,4,5], 2))
    [[1, 2], [3, 4], [5]]

    """
    sublist = []

    for index, item in enumerate(iterable):
        if index > 0 and index % size == 0:
            yield sublist
            sublist = []

        sublist.append(item)

    if sublist:
        yield sublist
Wilfred Hughes
  • +1 it omits padding; yours and [bcoughlan's](http://stackoverflow.com/a/18243990/611007) are very similar – n611x007 Apr 24 '14 at 09:54
1

You can use the partition or chunks function from the funcy library:

from funcy import partition

for a, b, c, d in partition(4, ints):
    foo += a * b * c * d

These functions also have iterator versions, ipartition and ichunks, which will be more efficient in this case.

You can also peek at their implementation.
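
For instance, partition drops an incomplete tail while chunks keeps it (a quick sketch; check the funcy docs for your version):

from funcy import chunks, partition

print(partition(4, list(range(10))))  # [[0, 1, 2, 3], [4, 5, 6, 7]]
print(chunks(4, list(range(10))))     # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]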

Suor
1

About the solution given by J.F. Sebastian here:

def chunker(iterable, chunksize):
    return zip(*[iter(iterable)]*chunksize)

It's clever, but it has one disadvantage: it always returns tuples. How do you get strings instead?
Of course you can write ''.join(chunker(...)), but the temporary tuple is constructed anyway.

You can get rid of the temporary tuple by writing own zip, like this:

class IteratorExhausted(Exception):
    pass

def translate_StopIteration(iterable, to=IteratorExhausted):
    for i in iterable:
        yield i
    raise to # StopIteration would get ignored because this is generator,
             # but custom exception can leave the generator.

def custom_zip(*iterables, reductor=tuple):
    iterators = tuple(map(translate_StopIteration, iterables))
    while True:
        try:
            yield reductor(next(i) for i in iterators)
        except IteratorExhausted: # when any of iterators get exhausted.
            break

Then

def chunker(data, size, reductor=tuple):
    return custom_zip(*[iter(data)]*size, reductor=reductor)

Example usage:

>>> for i in chunker('12345', 2):
...     print(repr(i))
...
('1', '2')
('3', '4')
>>> for i in chunker('12345', 2, ''.join):
...     print(repr(i))
...
'12'
'34'
GingerPlusPlus
  • Not a critique meant for you to change your answer, but rather a comment: Code is a liability. The more code you write the more space you create for bugs to hide. From this point of view, rewriting `zip` instead of using the existing one seems not to be the best idea. – Alfe Apr 15 '16 at 10:32
1

Here is a chunker without imports that supports generators:

def chunks(seq, size):
    it = iter(seq)
    while True:
        ret = []
        for _ in range(size):
            try:
                ret.append(next(it))
            except StopIteration:
                # drop the partial chunk and end; raising StopIteration
                # inside a generator is a RuntimeError since PEP 479
                return
        yield tuple(ret)

Example of use:

>>> def foo():
...     i = 0
...     while True:
...         i += 1
...         yield i
...
>>> c = chunks(foo(), 3)
>>> next(c)
(1, 2, 3)
>>> next(c)
(4, 5, 6)
>>> list(chunks('abcdefg', 2))
[('a', 'b'), ('c', 'd'), ('e', 'f')]
Cuadue
1

I like this approach. It feels simple and not magical, supports all iterable types, and doesn't require imports.

def chunk_iter(iterable, chunk_size):
    it = iter(iterable)
    while True:
        # collect with a plain loop: a bare next() inside a generator
        # expression would turn exhaustion into RuntimeError (PEP 479)
        chunk = []
        for _ in range(chunk_size):
            try:
                chunk.append(next(it))
            except StopIteration:
                break
        if not chunk:
            return
        yield tuple(chunk)
BallpointBen
1

Quite pythonic (you may also inline the body of the split_groups function):

import itertools
def split_groups(iter_in, group_size):
    return ((x for _, x in item) for _, item in itertools.groupby(enumerate(iter_in), key=lambda x: x[0] // group_size))

for x, y, z, w in split_groups(range(16), 4):
    foo += x * y + z * w
Andrey Cizov
1

Here is my go; it works on lists, iterators, and ranges ... lazily:

def chunker(it,size):
    rv = [] 
    for i,el in enumerate(it,1) :   
        rv.append(el)
        if i % size == 0 : 
            yield rv
            rv = []
    if rv : yield rv        

Almost made it a one-liner ;(

In [95]: list(chunker(range(9),2) )                                                                                                                                          
Out[95]: [[0, 1], [2, 3], [4, 5], [6, 7], [8]]

In [96]: list(chunker([1,2,3,4,5],2) )                                                                                                                                       
Out[96]: [[1, 2], [3, 4], [5]]

In [97]: list(chunker(iter(range(9)),2) )                                                                                                                                    
Out[97]: [[0, 1], [2, 3], [4, 5], [6, 7], [8]]

In [98]: list(chunker(range(9),25) )                                                                                                                                         
Out[98]: [[0, 1, 2, 3, 4, 5, 6, 7, 8]]

In [99]: list(chunker(range(9),1) )                                                                                                                                          
Out[99]: [[0], [1], [2], [3], [4], [5], [6], [7], [8]]

In [101]: %timeit list(chunker(range(101),2) )                                                                                                                               
11.3 µs ± 68.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
sten
0

At first, I designed it to split strings into substrings to parse a string containing hex.
Today I turned it into a complex, but still simple, generator.

from itertools import islice

def chunker(iterable, size, reductor, condition):
    it = iter(iterable)
    def chunk_generator():
        # islice rather than a bare next() in a generator expression,
        # which would raise RuntimeError on exhaustion under PEP 479
        return islice(it, size)
    chunk = reductor(chunk_generator())
    while condition(chunk):
        yield chunk
        chunk = reductor(chunk_generator())

Arguments:

Obvious ones

  • iterable is any iterable / iterator / generator containing / generating / iterating over the input data,
  • size is, of course, the size of the chunk you want to get,

More interesting

  • reductor is a callable which receives a generator iterating over the content of a chunk.
    I'd expect it to return a sequence or string, but I don't demand that.

    You can pass as this argument for example list, tuple, set, frozenset,
    or anything fancier. I'd pass this function, returning string
    (provided that iterable contains / generates / iterates over strings):

    def concatenate(iterable):
        return ''.join(iterable)
    

    Note that reductor can cause the generator to close by raising an exception.

  • condition is a callable which receives whatever reductor returned.
    It decides to approve & yield it (by returning anything evaluating to True),
    or to decline it & finish the generator's work (by returning anything else or raising an exception).

    When the number of elements in iterable is not divisible by size, once it gets exhausted reductor will receive a generator yielding fewer elements than size.
    Let's call these elements the last elements.

    I invented two functions to pass as this argument:

    • lambda x:x - the last elements will be yielded.

    • lambda x: len(x)==<size> - the last elements will be rejected;
      replace <size> with a number equal to size

GingerPlusPlus
0

It is easy to make itertools.groupby work for you to get an iterable of iterables, without creating any temporary lists:

groupby(iterable, (lambda x,y: (lambda z: x.next()/y))(count(),100))

Don't get put off by the nested lambdas; the outer lambda runs just once, to put the count() generator and the constant 100 into the scope of the inner lambda. (This is Python 2 code; on Python 3 use next(x) // y in the inner lambda.)

I use this to send chunks of rows to mysql.

for k,v in groupby(bigdata, (lambda x,y: (lambda z: x.next()/y))(count(),100)):
    cursor.executemany(sql, v)
topkara
0

This answer splits a list of strings, for example, to achieve PEP 8 line-length compliance:

def split(what, target_length=79):
    '''splits list of strings into sublists, each 
    having string length at most 79'''
    out = [[]]
    while what:
        if len("', '".join(out[-1])) + len(what[0]) < target_length:
            out[-1].append(what.pop(0))
        else:
            if not out[-1]: # string longer than target_length
                out[-1] = [what.pop(0)]
            out.append([])
    return [sub for sub in out if sub]  # drop a possible trailing empty sublist

Use as

>>> split(['deferred_income', 'long_term_incentive', 'restricted_stock_deferred', 'shared_receipt_with_poi', 'loan_advances', 'from_messages', 'other', 'director_fees', 'bonus', 'total_stock_value', 'from_poi_to_this_person', 'from_this_person_to_poi', 'restricted_stock', 'salary', 'total_payments', 'exercised_stock_options'], 75)
[['deferred_income', 'long_term_incentive', 'restricted_stock_deferred'], ['shared_receipt_with_poi', 'loan_advances', 'from_messages', 'other'], ['director_fees', 'bonus', 'total_stock_value', 'from_poi_to_this_person'], ['from_this_person_to_poi', 'restricted_stock', 'salary', 'total_payments'], ['exercised_stock_options']]
serv-inc
0

I am hoping that by turning an iterator out of a list I am not simply copying a slice of the list. For lists, this yields a generator over the relevant indices instead of a physically copied slice of 1000 entries, which would be less efficient; other sliceable iterables, such as ranges, are sliced directly and stay lazy.

def iter_group(iterable, batch_size:int):
    length = len(iterable)
    start = batch_size*-1
    end = 0
    while(end < length):
        start += batch_size
        end += batch_size
        if type(iterable) == list:
            # min(length, end): min(length-1, end) would drop the last element
            yield (iterable[i] for i in range(start, min(length, end)))
        else:
            yield iterable[start:end]

Usage:

items = list(range(1,1251))

for item_group in iter_group(items, 1000):
    for item in item_group:
        print(item)
Ben
0

In my special case, I needed to pad the items, repeating the last element until the chunk reaches the size, so I changed this answer to suit my needs.

Example of needed input output with size 4:

Input = [1,2,3,4,5,6,7,8]
Output= [[1,2,3,4], [5,6,7,8]]

Input = [1,2,3,4,5,6,7]
Output= [[1,2,3,4], [5,6,7,7]]

Input = [1,2,3,4,5]
Output= [[1,2,3,4], [5,5,5,5]]
def chunker(seq, size):
    res = []
    for el in seq:
        res.append(el)
        if len(res) == size:
            yield res
            res = []
    if res:
        res = res + (size - len(res)) * [res[-1]] 
        yield res
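
A quick check against the third example above:

>>> list(chunker([1, 2, 3, 4, 5], 4))
[[1, 2, 3, 4], [5, 5, 5, 5]]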
Amir Pourmand
-1

If the lists are the same size, you can combine them into lists of 4-tuples with zip(). For example:

# Four lists of four elements each.

l1 = range(0, 4)
l2 = range(4, 8)
l3 = range(8, 12)
l4 = range(12, 16)

for i1, i2, i3, i4 in zip(l1, l2, l3, l4):
    ...

Here's what the zip() function produces:

>>> print l1
[0, 1, 2, 3]
>>> print l2
[4, 5, 6, 7]
>>> print l3
[8, 9, 10, 11]
>>> print l4
[12, 13, 14, 15]
>>> print zip(l1, l2, l3, l4)
[(0, 4, 8, 12), (1, 5, 9, 13), (2, 6, 10, 14), (3, 7, 11, 15)]

If the lists are large, and you don't want to combine them into a bigger list, use itertools.izip(), which produces an iterator, rather than a list.

from itertools import izip

for i1, i2, i3, i4 in izip(l1, l2, l3, l4):
    ...
Brian Clapper
-2

There doesn't seem to be a pretty way to do this. Here is a page that has a number of methods, including:

def split_seq(seq, size):
    newseq = []
    splitsize = 1.0/size*len(seq)
    for i in range(size):
        newseq.append(seq[int(round(i*splitsize)):int(round((i+1)*splitsize))])
    return newseq
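
Note that this splits a sequence into size pieces of roughly equal length, rather than into pieces of length size:

>>> split_seq(list(range(10)), 3)
[[0, 1, 2], [3, 4, 5, 6], [7, 8, 9]]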
Harley Holcombe
-2

Why not use a list comprehension?

l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
n = 4
filler = 0
fills = (n - len(l) % n) % n  # how many fillers are needed to pad to a multiple of n
chunks = [(l + [filler] * fills)[x * n:x * n + n] for x in range(int((len(l) + n - 1) / n))]
print(chunks)

[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 0]]
Mohammad Azim