Short Answer
If you are using PostgreSQL or Oracle, you can use Django's built-in iterator:
    queryset.iterator(chunk_size=1000)
This causes Django to use server-side cursors and to avoid caching models as it iterates through the queryset. As of Django 4.1, this even works with prefetch_related.
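For example, chunked iteration combined with a prefetch might be sketched like this (the model, the "posts" related name, and the function name are hypothetical, for illustration only):

```python
# Sketch: chunked iteration that still batches prefetches (Django >= 4.1).
# `queryset` is any queryset; "posts" is a hypothetical related name.
def iter_authors_with_posts(queryset, chunk_size=1000):
    # iterator() opens a server-side cursor on PostgreSQL/Oracle; since
    # Django 4.1 it also honors prefetch_related, running one prefetch
    # query per chunk instead of loading everything up front.
    for author in queryset.prefetch_related("posts").iterator(chunk_size=chunk_size):
        yield author
```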
For other databases, you can use the following:
    def queryset_iterator(queryset, page_size=1000):
        page = queryset.order_by("pk")[:page_size]
        while page:
            for obj in page:
                yield obj
            pk = obj.pk
            page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]
If you want to get back pages rather than individual objects, so you can combine this with other optimizations such as bulk_update, use this:
    def queryset_to_pages(queryset, page_size=1000):
        page = queryset.order_by("pk")[:page_size]
        while page:
            yield page
            pk = max(obj.pk for obj in page)
            page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]
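Under the hood this is plain keyset pagination: each page comes from a `WHERE pk > ? ORDER BY pk LIMIT ?` query. The same pattern can be sketched standalone against sqlite3 (the `item` table and its columns are invented for illustration):

```python
import sqlite3

# Minimal keyset-pagination sketch, mirroring the SQL the Django
# version issues: WHERE pk > ? ORDER BY pk LIMIT ?
def rows_in_pages(conn, page_size=1000):
    last_pk = -1  # assumes non-negative integer primary keys
    while True:
        page = conn.execute(
            "SELECT id, name FROM item WHERE id > ? ORDER BY id LIMIT ?",
            (last_pk, page_size),
        ).fetchall()
        if not page:
            return
        yield page
        last_pk = page[-1][0]  # highest pk seen drives the next query
```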
Performance Profiling on PostgreSQL
I profiled a number of different approaches on a PostgreSQL table with about 200,000 rows on Django 3.2 and Postgres 13. For every query, I added up the sum of the ids, both to ensure that Django was actually retrieving the objects and so that I could verify correctness of iteration between queries. All of the timings were taken after several iterations over the table in question to minimize caching advantages of later tests.
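The measurement pattern can be sketched roughly as follows (names are illustrative, not the exact harness): time the strategy, record peak memory, and check the id checksum so that a faster-but-incorrect approach cannot win.

```python
import time
import tracemalloc

# Illustrative harness: time an iteration strategy and verify its id
# checksum against a known-good total, so a strategy that skips or
# repeats rows fails loudly instead of looking fast.
def profile_strategy(iterate, expected_checksum):
    tracemalloc.start()
    start = time.perf_counter()
    checksum = sum(obj_id for obj_id in iterate())
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    assert checksum == expected_checksum, "strategy skipped or repeated rows"
    return elapsed, peak
```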
Basic Iteration
The basic approach is just iterating over the table. The main issue with this approach is that the amount of memory used is not constant; it grows with the size of the table, and I've seen this run out of memory on larger tables.
    x = sum(i.id for i in MyModel.objects.all())
Wall time: 3.53 s, 22MB of memory (BAD)
Django Iterator
The Django iterator (at least as of Django 3.2) fixes the memory issue and even gives a minor performance benefit. Presumably this comes from Django spending less time managing its result cache.
    assert sum(i.id for i in MyModel.objects.all().iterator(chunk_size=1000)) == x
Wall time: 3.11 s, <1MB of memory
Custom Iterator
The natural comparison point is attempting to do the paging ourselves with progressively increasing filters on the primary key. While this is an improvement over naive iteration in that it has constant memory, it actually loses to Django's built-in iterator on speed because it makes more database queries.
    def queryset_iterator(queryset, page_size=1000):
        page = queryset.order_by("pk")[:page_size]
        while page:
            for obj in page:
                yield obj
            pk = obj.pk
            page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]
    assert sum(i.id for i in queryset_iterator(MyModel.objects.all())) == x
Wall time: 3.65 s, <1MB of memory
Custom Paging Function
The main reason to use the custom iteration is so that you can get the results in pages. This function is very useful to then plug in to bulk-updates while only using constant memory. It's a bit slower than queryset_iterator in my tests and I don't have a coherent theory as to why, but the slowdown isn't substantial.
    def queryset_to_pages(queryset, page_size=1000):
        page = queryset.order_by("pk")[:page_size]
        while page:
            yield page
            pk = max(obj.pk for obj in page)
            page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]
    assert sum(i.id for page in queryset_to_pages(MyModel.objects.all()) for i in page) == x
Wall time: 4.49 s, <1MB of memory
Alternative Custom Paging Function
Given that Django's queryset iterator is faster than doing the paging ourselves, the queryset pager can alternatively be implemented on top of it. It's a little bit faster than paging ourselves, but the implementation is messier. Readability matters, which is why my personal preference is the previous paging function, but this one can be better if your queryset doesn't include a primary key in the results (for whatever reason).
    def queryset_to_pages2(queryset, page_size=1000):
        page = []
        page_count = 0
        for obj in queryset.iterator():
            page.append(obj)
            page_count += 1
            if page_count == page_size:
                yield page
                page = []
                page_count = 0
        if page:  # avoid yielding a trailing empty page
            yield page
    assert sum(i.id for page in queryset_to_pages2(MyModel.objects.all()) for i in page) == x
Wall time: 4.33 s, <1MB of memory
Bad Approaches
The following are approaches you should never use (many of which are suggested in the question) along with why.
Do NOT Use Slicing on an Unordered Queryset
Whatever you do, do NOT slice an unordered queryset. This does not correctly iterate over the table. The reason is that the slice operation performs a SQL LIMIT + OFFSET query based on your queryset, and Django querysets have no order guarantee unless you use order_by. Additionally, PostgreSQL does not apply a default ordering, and the Postgres docs specifically warn against using LIMIT + OFFSET without ORDER BY. As a result, each time you take a slice you get a non-deterministic slice of your table, which means your slices may overlap and, between them, may not cover every row. In my experience, this only happens when something else is modifying data in the table while you are iterating, which makes the problem all the more pernicious: the bug might not show up when you test your code in isolation.
    def very_bad_iterator(queryset, page_size=1000):
        counter = 0
        count = queryset.count()
        while counter < count:
            for model in queryset[counter:counter+page_size].iterator():
                yield model
            counter += page_size
    assert sum(i.id for i in very_bad_iterator(MyModel.objects.all())) == x
Assertion Error; i.e. INCORRECT RESULT COMPUTED!!!
Do NOT use Slicing for Whole-Table Iteration in General
Even if we order the queryset, list slicing is abysmal from a performance perspective. This is because SQL OFFSET is a linear-time operation: the database still scans past every skipped row. A LIMIT + OFFSET paged iteration of a table is therefore quadratic in the number of rows. With 200,000 rows and a page size of 1,000, the database scans past roughly 0 + 1,000 + 2,000 + … + 199,000 ≈ 20 million rows just to skip offsets, versus 200,000 rows total for keyset paging. You absolutely do not want this.
    def bad_iterator(queryset, page_size=1000):
        counter = 0
        count = queryset.count()
        while counter < count:
            for model in queryset.order_by("id")[counter:counter+page_size].iterator():
                yield model
            counter += page_size
    assert sum(i.id for i in bad_iterator(MyModel.objects.all())) == x
Wall time: 15s (BAD), <1MB of memory
Do NOT use Django's Paginator for Whole-Table Iteration
Django comes with a built-in Paginator. It may be tempting to think it is appropriate for a paged iteration over a whole table, but it is not. The point of Paginator is to return a single page of a result set to a UI or an API endpoint. It is substantially slower than any of the good approaches at iterating over a table.
    from django.core.paginator import Paginator

    def bad_paged_iterator(queryset, page_size=1000):
        p = Paginator(queryset.order_by("pk"), page_size)
        for i in p.page_range:
            yield p.get_page(i)
    assert sum(i.id for page in bad_paged_iterator(MyModel.objects.all()) for i in page) == x
Wall time: 13.1 s (BAD), <1MB of memory