
I have a very large table. It's currently in a MySQL database. I use django.

I need to iterate over each element of the table to pre-compute some particular data (maybe if I was better I could do otherwise but that's not the point).

I'd like to keep the iteration as fast as possible with a constant usage of memory.

As already clearly explained in Limiting Memory Use in a *Large* Django QuerySet and Why is iterating through a large Django QuerySet consuming massive amounts of memory?, a simple iteration over all objects in Django will kill the machine, as it will retrieve ALL objects from the database.

Towards a solution

First of all, to reduce your memory consumption you should make sure DEBUG is False (or monkey patch the cursor: turn off SQL logging while keeping settings.DEBUG?), so that Django isn't storing the query log on the connection for debugging.

But even with that,

for model in Model.objects.all():

is a no go.

Not even with the slightly improved form:

for model in Model.objects.all().iterator():

Using iterator() will save you some memory by not storing the result cache internally (though not necessarily on PostgreSQL!); but it will still, apparently, retrieve the whole set of objects from the database.

A naive solution

The solution in the first question is to slice the results, based on a counter, into chunks of chunk_size. There are several ways to write it, but basically they all come down to an OFFSET + LIMIT query in SQL.

Something like:

chunk_size = 1000
qs = Model.objects.all()
counter = 0
count = qs.count()
while counter < count:
    for model in qs[counter:counter + chunk_size].iterator():
        yield model
    counter += chunk_size

While this is memory efficient (memory usage is bounded by chunk_size), it's really poor in terms of speed: as OFFSET grows, both MySQL and PostgreSQL (and likely most DBs) will start choking and slowing down.
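To see why, it helps to count rows scanned rather than rows returned. A toy cost model (an illustration, not a benchmark):

```python
# Back-of-the-envelope model (not a benchmark): with OFFSET/LIMIT paging
# the database must walk past `offset` rows before returning `limit` rows,
# so the total number of rows touched grows quadratically with table size.
def offset_paging_rows_touched(n_rows, chunk_size):
    touched = 0
    for offset in range(0, n_rows, chunk_size):
        touched += offset + min(chunk_size, n_rows - offset)
    return touched

def keyset_paging_rows_touched(n_rows, chunk_size):
    # With an index seek on the PK, each page touches only its own rows.
    return n_rows

print(offset_paging_rows_touched(10_000, 1_000))  # 55000: 5.5x the table size
print(keyset_paging_rows_touched(10_000, 1_000))  # 10000: one pass
```

For a 10,000-row table paged 1,000 at a time, OFFSET paging touches 55,000 rows where a PK-based seek touches 10,000, and the gap widens quadratically with the table size.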

A better solution

A better solution is available in this post by Thierry Schellenbach. It filters on the PK, which is way faster than offsetting (how much faster probably depends on the DB):

import gc

chunk_size = 1000
pk = 0
last_pk = qs.order_by('-pk')[0].pk
queryset = qs.order_by('pk')
while pk < last_pk:
    for row in queryset.filter(pk__gt=pk)[:chunk_size]:
        pk = row.pk
        yield row
    gc.collect()

This is starting to get satisfactory: memory usage is now O(chunk_size) and speed is roughly O(N).
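The logic can be checked without a database by simulating keyset pagination over an in-memory dict standing in for the table (a toy model, not Django code):

```python
# The keyset idea, simulated on an in-memory "table" keyed by pk:
# repeatedly take the next chunk of rows whose pk is greater than the
# last pk seen. Each row is visited exactly once, gaps in pks and all,
# and only one chunk is materialized at a time.
def keyset_iterate(rows_by_pk, chunk_size):
    pk = 0  # assumes positive integer pks, as in the snippet above
    last_pk = max(rows_by_pk)
    while pk < last_pk:
        chunk = sorted(k for k in rows_by_pk if k > pk)[:chunk_size]
        for row_pk in chunk:
            pk = row_pk
            yield rows_by_pk[row_pk]

table = {pk: "row-%d" % pk for pk in range(1, 101) if pk % 7 != 0}  # gaps
seen = list(keyset_iterate(table, chunk_size=10))
# every remaining row is visited exactly once, in pk order
```

Note that holes in the pk sequence (deleted rows) are handled naturally, which is exactly why pk__gt beats counter-based slicing.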

Issues with the "better" solution

The better solution only works when the PK is available in the QuerySet. Unfortunately, that's not always the case, in particular when the QuerySet contains combinations of distinct (group_by) and/or values (ValuesQuerySet).

For that situation the "better solution" cannot be used.

Can we do better?

Now I'm wondering whether we can go faster, and avoid the issue with QuerySets lacking a PK, perhaps using something I found in other answers, but only in pure SQL: cursors.

Since I'm quite bad with raw SQL, in particular in Django, here comes the real question:

how can we build a better Django QuerySet iterator for large tables?

My take from what I've read is that we should use server-side cursors (apparently (see references) using a standard Django Cursor would not achieve the same result, because by default both python-MySQL and psycopg connectors cache the results).

Would this really be a faster (and/or more efficient) solution?

Can this be done using raw SQL in django? Or should we write specific python code depending on the database connector?

Server Side cursors in PostgreSQL and in MySQL

That's as far as I could get for the moment...

a Django chunked_iterator()

Now, of course, the best outcome would be to have this method work as queryset.iterator(), rather than iterate(queryset), and to be part of Django core, or at least a pluggable app.

Update: thanks to "T" in the comments for finding a django ticket that carries some additional information. Differences in connector behaviors make it likely that the best solution would be a specific chunked method rather than transparently extending iterator (which sounds like a good approach to me). An implementation stub exists, but there hasn't been any work on it in a year, and it does not look like the author is ready to jump on that yet.

Additional Refs:

  1. Why does MYSQL higher LIMIT offset slow the query down?
  2. How can I speed up a MySQL query with a large offset in the LIMIT clause?
  3. http://explainextended.com/2009/10/23/mysql-order-by-limit-performance-late-row-lookups/
  4. postgresql: offset + limit gets to be very slow
  5. Improving OFFSET performance in PostgreSQL
  6. http://www.depesz.com/2011/05/20/pagination-with-fixed-order/
  7. How to get a row-by-row MySQL ResultSet in python
  8. Server Side Cursor in MySQL

Edits:

Django 1.6 is adding persistent database connections

Django Database Persistent Connections

This should facilitate, under some conditions, using cursors. Still, it's beyond my current skills (and time to learn) to implement such a solution...

Also, the "better solution" definitely does not work in all situations and cannot be used as a generic approach, only as a stub to be adapted case by case...

Stefano
    Wow, that was a really well-researched question! :) – Daniel Eriksson Jan 03 '13 at 18:12
  • Thanks @DanielEriksson, I thought I would manage to get it all working by myself but I'm not yet there... – Stefano Jan 03 '13 at 18:14
  • Oh, and additional solutions involve building custom indexes (eg. see the Pagination solution), but I was hoping for a more general solution – Stefano Jan 03 '13 at 18:15
  • I think this is actually being done right now, check this out: https://code.djangoproject.com/ticket/16614 It seems that for .iterator and similar use-cases the cursor will now be an SSCursor, which is what you want instead of the default, and this will happen transparently. – t.dubrownik Jan 04 '13 at 01:56
  • @t.dubrownik wow! man I searched, but I did not find that in the most obvious place. There is precious additional information there, although not full consensus on whether that could happen transparently or through a separate method chunked; unluckily it sounds like development is stalled. Let's see if we collect something interesting. I'll go update my question :) – Stefano Jan 04 '13 at 07:42
  • I realise that you are looking for a way to iterate however if one of the options you are looking at is using raw SQL are you sure you can't just use SQL to handle this? I've found very few cases where SQL cursors can't be replaced with a more efficient set based solution – Macros Jan 07 '13 at 18:47
  • @Macros the point is that I'm not proficient enough in SQL to build an efficient iteration, but indeed I'm very open to suggestions on how to do this from django with raw sql! – Stefano Jan 08 '13 at 09:07
  • If you could share some information on the tables / data and what you are trying to achieve by iterating then if there is an Sql based solution, SO will find it.. – Macros Jan 09 '13 at 00:10
  • @Macros there's nothing special about the table, just a quite large legacy one (26 Million rows, 12chars string PK, and many columns) that I need to go through row by row to build some data analysis, really too complex to be done directly in sql. I'm not looking for anything tailor made, I just would like the "optimal" iterator in django - there's no good one by default, and I'm just thinking there might be a better solution than what I already found. – Stefano Jan 10 '13 at 00:24
  • WARNING: Do NOT do iterative slices on an unordered queryset in Django; ordering is nondeterministic and your slices will likely end up being overlapping and not actually covering the whole table. – Zags Aug 15 '22 at 17:09

3 Answers


Short Answer

If you are using PostgreSQL or Oracle, you can use Django's built-in iterator:

queryset.iterator(chunk_size=1000)

This causes Django to use server-side cursors and not cache models as it iterates through the queryset. As of Django 4.1, this will even work with prefetch_related.

For other databases, you can use the following:

def queryset_iterator(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        for obj in page:
            yield obj
            pk = obj.pk
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]

If you want to get back pages rather than individual objects to combine with other optimizations such as bulk_update, use this:

def queryset_to_pages(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        yield page
        pk = max(obj.pk for obj in page)
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]
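These paging helpers can be smoke-tested without a database by faking the tiny slice of the QuerySet API they rely on (order_by, filter(pk__gt=...), and slicing). The stand-in below is purely illustrative; it is not Django's real QuerySet:

```python
from dataclasses import dataclass

@dataclass
class Row:
    pk: int

class FakeQuerySet:
    """Toy stand-in for the QuerySet methods the pagers rely on."""
    def __init__(self, rows):
        self._rows = list(rows)
    def order_by(self, field):
        assert field == "pk"
        return FakeQuerySet(sorted(self._rows, key=lambda r: r.pk))
    def filter(self, pk__gt):
        return FakeQuerySet(r for r in self._rows if r.pk > pk__gt)
    def __getitem__(self, item):
        return self._rows[item]  # slicing yields a plain list "page"

def queryset_to_pages(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        yield page
        pk = max(obj.pk for obj in page)
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]

qs = FakeQuerySet(Row(pk) for pk in range(1, 26))
pages = list(queryset_to_pages(qs, page_size=10))
# 25 rows in pages of 10 -> page sizes 10, 10, 5; each pk appears once
```

The same harness works for queryset_iterator, since it uses the identical three operations.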

Performance Profiling on PostgreSQL

I profiled a number of different approaches on a PostgreSQL table with about 200,000 rows on Django 3.2 and Postgres 13. For every query, I added up the sum of the ids, both to ensure that Django was actually retrieving the objects and so that I could verify correctness of iteration between queries. All of the timings were taken after several iterations over the table in question to minimize caching advantages of later tests.

Basic Iteration

The basic approach is just iterating over the table. The main issue with this approach is that the amount of memory used is not constant; it grows with the size of the table, and I've seen this run out of memory on larger tables.

x = sum(i.id for i in MyModel.objects.all())

Wall time: 3.53 s, 22MB of memory (BAD)

Django Iterator

The Django iterator (at least as of Django 3.2) fixes the memory issue, with a minor performance benefit to boot. Presumably this comes from Django spending less time managing the cache.

assert sum(i.id for i in MyModel.objects.all().iterator(chunk_size=1000)) == x

Wall time: 3.11 s, <1MB of memory

Custom Iterator

The natural comparison point is attempting to do the paging ourselves with progressively increasing queries on the primary key. While this is an improvement over naive iteration in that it has constant memory, it actually loses to Django's built-in iterator on speed because it makes more database queries.

def queryset_iterator(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        for obj in page:
            yield obj
            pk = obj.pk
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]

assert sum(i.id for i in queryset_iterator(MyModel.objects.all())) == x

Wall time: 3.65 s, <1MB of memory

Custom Paging Function

The main reason to use custom iteration is so that you can get the results in pages. This function is very useful to plug into bulk updates while using only constant memory. It's a bit slower than queryset_iterator in my tests, and I don't have a coherent theory as to why, but the slowdown isn't substantial.

def queryset_to_pages(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        yield page
        pk = max(obj.pk for obj in page)
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]

assert sum(i.id for page in queryset_to_pages(MyModel.objects.all()) for i in page) == x

Wall time: 4.49 s, <1MB of memory

Alternative Custom Paging Function

Given that Django's queryset iterator is faster than doing the paging ourselves, the queryset pager can alternatively be implemented on top of it. It's a little bit faster than doing the paging ourselves, but the implementation is messier. Readability matters, which is why my personal preference is the previous paging function, but this one can be better if your queryset doesn't have a primary key in the results (for whatever reason).

def queryset_to_pages2(queryset, page_size=1000):
    page = []
    page_count = 0
    for obj in queryset.iterator():
        page.append(obj)
        page_count += 1
        if page_count == page_size:
            yield page
            page = []
            page_count = 0
    if page:  # don't yield a trailing empty page
        yield page

assert sum(i.id for page in queryset_to_pages2(MyModel.objects.all()) for i in page) == x

Wall time: 4.33 s, <1MB of memory


Bad Approaches

The following are approaches you should never use (many of which are suggested in the question) along with why.

Do NOT Use Slicing on an Unordered Queryset

Whatever you do, do NOT slice an unordered queryset. This does not correctly iterate over the table. The reason is that the slice operation does a SQL limit + offset query based on your queryset, and Django querysets have no order guarantee unless you use order_by. Additionally, PostgreSQL does not have a default order by, and the Postgres docs specifically warn against using limit + offset without order by. As a result, each time you take a slice, you are getting a non-deterministic slice of your table, which means your slices may be overlapping and may not cover all rows of the table between them. In my experience, this only happens if something else is modifying data in the table while you are doing the iteration, which only makes this problem more pernicious, because it means the bug might not show up if you are testing your code in isolation.

def very_bad_iterator(queryset, page_size=1000):
    counter = 0
    count = queryset.count()
    while counter < count:     
        for model in queryset[counter:counter+page_size].iterator():
            yield model
        counter += page_size

assert sum(i.id for i in very_bad_iterator(MyModel.objects.all())) == x

Assertion Error; i.e. INCORRECT RESULT COMPUTED!!!
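The failure mode is easy to reproduce without a database: simulate a table whose scan order changes between queries (a toy simulation; which rows a real database actually returns depends on its query plan):

```python
import random

# Simulate a table whose "default" row order changes between queries, as
# an unordered SQL result may. Each slice is taken from a fresh shuffle,
# mimicking one LIMIT/OFFSET query per page with no ORDER BY.
def unordered_sliced_scan(pks, page_size, rng):
    seen = []
    for offset in range(0, len(pks), page_size):
        order = list(pks)
        rng.shuffle(order)  # no ORDER BY: no ordering guarantee
        seen.extend(order[offset:offset + page_size])
    return seen

pks = list(range(1, 101))
seen = unordered_sliced_scan(pks, 10, random.Random(0))
# 100 rows were fetched, yet typically some pks repeat and others are
# never seen at all -- the slices overlap instead of tiling the table.
```

Running this with almost any seed shows duplicated and missing pks, which is exactly the silent-corruption behavior the assertion error above detected.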

Do NOT use Slicing for Whole-Table Iteration in General

Even if we order the queryset, list slicing is abysmal from a performance perspective. This is because SQL offset is a linear time operation, which means that a limit + offset paged iteration of a table will be quadratic time, which you absolutely do not want.

def bad_iterator(queryset, page_size=1000):
    counter = 0
    count = queryset.count()
    while counter < count:     
        for model in queryset.order_by("id")[counter:counter+page_size].iterator():
            yield model
        counter += page_size

assert sum(i.id for i in bad_iterator(MyModel.objects.all())) == x

Wall time: 15s (BAD), <1MB of memory

Do NOT use Django's Paginator for Whole-Table Iteration

Django comes with a built-in Paginator. It may be tempting to think that it is appropriate for doing a paged iteration of a database, but it is not. The point of Paginator is to return a single page of results to a UI or an API endpoint. It is substantially slower than any of the good approaches at iterating over a table.

from django.core.paginator import Paginator

def bad_paged_iterator(queryset, page_size=1000):
    p = Paginator(queryset.order_by("pk"), page_size)
    for i in p.page_range:
        yield p.get_page(i)
        
assert sum(i.id for page in bad_paged_iterator(MyModel.objects.all()) for i in page) == x

Wall time: 13.1 s (BAD), <1MB of memory

Zags

The essential answer: use raw SQL with server-side cursors.

Sadly, as of Django 1.5.2 there is no formal way to create a server-side MySQL cursor (I'm not sure about other database engines), so I wrote some magic code to solve this problem.

For Django 1.5.2 and MySQLdb 1.2.4, the following code works, and it's well commented.

Caution: This is not based on public APIs, so it will probably break in future Django versions.

# This script should be tested under a Django shell, e.g., ./manage.py shell

from types import MethodType

import MySQLdb.cursors
import MySQLdb.connections
from django.db import connection
from django.db.backends.util import CursorDebugWrapper


def close_sscursor(self):
    """An instance method that replaces the close() method of the old cursor.

    Closing the server-side cursor with the original close() method would be
    quite slow and memory-intensive if the large result set were not yet
    exhausted, because fetchall() is called internally to drain the remaining
    records. Notice that close() is also called when the cursor is garbage
    collected.

    This method closes the cursor more efficiently, but if the result set is
    not fully iterated, the next cursor created from the same connection
    won't work properly. You can avoid this by either (1) closing the
    connection before creating a new cursor, or (2) iterating the result set
    to the end before closing the server-side cursor.
    """
    if isinstance(self, CursorDebugWrapper):
        self.cursor.cursor.connection = None
    else:
        # This is for CursorWrapper object
        self.cursor.connection = None


def get_sscursor(connection, cursorclass=MySQLdb.cursors.SSCursor):
    """Get a server-side MySQL cursor."""
    if connection.settings_dict['ENGINE'] != 'django.db.backends.mysql':
        raise NotImplementedError('Only MySQL engine is supported')
    cursor = connection.cursor()
    if isinstance(cursor, CursorDebugWrapper):
        # Get the real MySQLdb.connections.Connection object
        conn = cursor.cursor.cursor.connection
        # Replace the internal client-side cursor with a server-side cursor
        cursor.cursor.cursor = conn.cursor(cursorclass=cursorclass)
    else:
        # This is for CursorWrapper object
        conn = cursor.cursor.connection
        cursor.cursor = conn.cursor(cursorclass=cursorclass)
    # Replace the old close() method
    cursor.close = MethodType(close_sscursor, cursor)
    return cursor


# Get the server-side cursor
cursor = get_sscursor(connection)

# Run a query with a large result set. Notice that the memory consumption is low.
cursor.execute('SELECT * FROM million_record_table')

# Fetch a single row, fetchmany() rows or iterate it via "for row in cursor:"
cursor.fetchone()

# You can interrupt the iteration at any time. This calls the new close() method,
# so no warning is shown.
cursor.close()

# The connection must be closed to let new cursors work properly; see the
# comments in close_sscursor().
connection.close()
Rockallite

There is another option available. It wouldn't make the iteration faster (in fact it would probably slow it down), but it would use far less memory. Depending on your needs, this may be appropriate.

large_qs = MyModel.objects.all().values_list("id", flat=True)
for model_id in large_qs:
    model_object = MyModel.objects.get(id=model_id)
    # do whatever you need to do with the model here

Only the ids are loaded into memory, and the objects are retrieved and discarded as needed. Note the increased database load and slower runtime, both tradeoffs for the reduction in memory usage.
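If the per-object get() is too slow, the id list can be consumed in fixed-size batches so each batch costs a single query instead of page_size queries (a sketch; MyModel stands for whatever model you're iterating):

```python
# Middle ground between one get() per row and loading everything: fetch
# the id list once, then load objects in fixed-size batches, turning
# N queries into N / batch_size queries.
def batched(seq, size):
    """Yield successive fixed-size batches from a sequence."""
    for start in range(0, len(seq), size):
        yield seq[start:start + size]

ids = list(range(1, 10_001))
batches = list(batched(ids, 1000))
# In Django, each batch would become a single query, e.g.:
#     for batch in batched(ids, 1000):
#         for obj in MyModel.objects.filter(id__in=batch):
#             ...  # process obj, then let it be garbage collected
```

Memory stays bounded by the batch size, at the cost of holding the full id list, which is the same trade-off this answer already makes.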

I've used this when running async scheduled tasks on worker instances, for which it doesn't really matter if they are slow, but if they try to use way too much memory they may crash the instance and therefore abort the process.

Clay Wardell
  • Thanks Clay, it is indeed an option that I forgot to add but in my case this didn't work. My database has about 30 million rows, and the PK is a string of 12 characters (not my choice!). This makes retrieving a list of IDs VERY memory hungry, and iterating over each object makes the retrieval EXTREMELY slow! – Stefano Jan 08 '13 at 09:00
  • For reference, using an improved version of the "better solution" above that only retrieves the required values made working over those 30M rows run in less than 30 minutes on a 512Mb, double core instance, which is already decent. The naive solution would keep slowing down and I was never patient enough to wait for the end (hours!). As you say, your solution will be slower, but retrieving the list of IDs also makes it require quite a lot of memory. – Stefano Jan 08 '13 at 09:05
  • Yeah, it looks like on its own this will not work for you. But, as you mentioned, using the option to only retrieve what you need in conjunction with some other technique might be helpful. – Clay Wardell Jan 08 '13 at 16:06