33

I recently upgraded from Django 1.4 to Django 1.7, and since then I keep getting the following error message for some scripts, sometimes:

OperationalError: (2006, 'MySQL server has gone away')

The scripts are very long or continuously running tasks that might involve phases of not communicating with the DB for several minutes, so the connection times out. Before I upgraded, that was no problem, as Django seemed to automatically re-establish the connection. Now it doesn't, which means the tasks often stop and fail in the middle.

Does anyone know what has changed and how I can fix it?

Is it perhaps related to this ticket/fix? https://code.djangoproject.com/ticket/21463

Thanks a lot!

Se Norm
  • Check [this](https://code.djangoproject.com/ticket/21597#comment:29) and [this](http://piwik.org/faq/troubleshooting/faq_183/). – Anshul Goyal Nov 23 '14 at 07:11

10 Answers

55

The reason for this behavior is persistent database connections, which were introduced in Django 1.6.

To prevent the connection timeout error, set CONN_MAX_AGE in settings.py to a value that is less than wait_timeout in the MySQL config (my.cnf). In that case Django detects that the connection needs to be reopened before MySQL discards it. The default wait_timeout for MySQL 5.7 is 28800 seconds.

settings.py:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'CONN_MAX_AGE': 3600,
        <other params here>
    }
}

Documentation: https://docs.djangoproject.com/en/1.7/ref/settings/#conn-max-age

my.cnf:

wait_timeout = 28800

Documentation: https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_wait_timeout
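
If you want to double-check the relationship between the two values at runtime, a minimal sanity check could look like this (a sketch run from `python manage.py shell`; the 'default' alias and the assertion message are my own, not part of the original answer):

from django.conf import settings
from django.db import connection

with connection.cursor() as cursor:
    cursor.execute("SHOW VARIABLES LIKE 'wait_timeout'")
    _, wait_timeout = cursor.fetchone()

# CONN_MAX_AGE of 0 means "close after every request", None means "keep forever"
conn_max_age = settings.DATABASES['default'].get('CONN_MAX_AGE', 0)
assert conn_max_age is not None and conn_max_age < int(wait_timeout), \
    "CONN_MAX_AGE should be smaller than MySQL's wait_timeout"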

Eugene Soldatov
  • 1
    Thank you SOOOO much. I just updated an application with django CMS 3.0.3, Django 1.6.5 and Zinnia, all using AUTH_USER_MODEL set to an email username model. Had a nightmare of a time. Finally got it all working, and the 'mysql server went away' errors came from nginx only (runserver worked fine). You are indeed a scholar! – MagicLAMP Feb 19 '19 at 04:40
  • Though it is odd that I don't need to use the CONN_MAX_AGE setting on any of my other Django 1.7+ apps, and I have upgraded past 1.6-1.7 before. I can only assume that wait_timeout in MySQL config is different on the other servers. – MagicLAMP Feb 19 '19 at 05:12
  • I'm glad to hear that it was helpful for you) – Eugene Soldatov Feb 19 '19 at 14:00
  • 2
    In actual fact, this did not solve my problem. It only occurred when using uwsgi and nginx. I suspect it is more to do with this: https://serverfault.com/questions/407612/error-2006-mysql-server-has-gone-away – MagicLAMP Jun 24 '19 at 06:42
22

I have a background process, rqworker, which executes separate jobs to refresh some data after user actions.

I always get OperationalError: (2006, 'MySQL server has gone away') if there was no user action for more than wait_timeout seconds, even if I set CONN_MAX_AGE to less than the MySQL wait_timeout.

As I understand it, changing CONN_MAX_AGE could help if Django checked and closed its connections automatically after this timeout. But Django 1.7.x checks them only before and after each request (see django/db/__init__.py#L101-L112).

As described in Django ticket 15119, Django used to make a ping before executing every query to validate that the connection was alive. This behavior was changed in commit 282b2f4.

Django developers gave one short answer for all questions like this in https://code.djangoproject.com/ticket/21597#comment:29

Therefore my rqworker process has to validate the connection itself for each new job. (Note: if we close a connection, Django will create a new one.)

I'm going to use Django's per-request approach for jobs and call django.db.close_old_connections() before and after every job, as sketched below. And yes, CONN_MAX_AGE should still be less than the MySQL wait_timeout, because django.db.close_old_connections() does not check whether the MySQL server has gone away.
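
A minimal sketch of that per-job approach (the decorator and the job function names are hypothetical; only close_old_connections() is Django's actual API):

from functools import wraps

from django.db import close_old_connections

def ensure_fresh_connection(job_func):
    """Mimic Django's request handling: close stale connections around each job."""
    @wraps(job_func)
    def wrapper(*args, **kwargs):
        close_old_connections()  # drop connections that outlived CONN_MAX_AGE
        try:
            return job_func(*args, **kwargs)
        finally:
            close_old_connections()  # and again once the job is done
    return wrapper

@ensure_fresh_connection
def refresh_user_data(user_id):
    ...  # ORM work here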

ta2-1
19

On Django 1.9: I had a Django shell running in a unix screen, left unattended for over 48 hours. When I returned to it and ran a <some_model>.objects.filter query, it threw OperationalError: (2006, 'MySQL server has gone away').

A quick import django.db; django.db.close_old_connections() did the trick for me.

I couldn't find documentation for close_old_connections() in the Django 1.9 docs; however, here's the direct link to its implementation in the Django codebase on GitHub.
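
For reference, the whole fix was just (SomeModel stands in for whichever model you were querying):

>>> import django.db
>>> django.db.close_old_connections()
>>> SomeModel.objects.filter(pk=1)  # the query works again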

Deep
9

In Django 1.6, once MySQL's wait_timeout has passed, DB access causes the (2006, 'MySQL server has gone away') error. This was not the case in Django 1.5.1.

I've noticed this error when using workers that run Django code (using gearman).

To reproduce:

Set the timeout to a low value by editing /etc/mysql/my.cnf and adding the following under [mysqld]:

wait_timeout = 10
interactive_timeout = 10

Then

% python manage.py shell

>>> # access DB 
>>> import django.contrib.auth.models
>>> print list(django.contrib.auth.models.User.objects.all())
>>> import time
>>> time.sleep(15)
>>> print list(django.contrib.auth.models.User.objects.all())

Now you get the error.

A simple solution I found on the web is to call django.db.close_connection() before the access:

>>> import django.db
>>> django.db.close_connection()
>>> print list(django.contrib.auth.models.User.objects.all())

This works OK.

  • 1
    This is the solution that I like, but django.db.close_connection does not exist in newer versions – Oleg Aug 29 '18 at 06:31
  • For me this is the best answer. I am using `db.connections['default'].close()` in newer Django version. – andrzej1_1 Oct 27 '20 at 07:28
3

We noticed this too. The answer above about setting CONN_MAX_AGE to something less than MySQL/MariaDB's wait_timeout works, but only for the web.

For long-running tasks, however, this doesn't seem to work. Instead, we wrapped the task and close the connection each time one of our long-running tasks gets executed.

We combine this with our own custom connection pool. Take it or leave it; the default Django one offers zero control, which is not something we liked in production. We set a maximum pool size so the server fails before it kills the DB with too many connections. Use the following as a decorator for your tasks:

@close_db_connection()
def task_do_something():
    print('Hello')


'''
Created on Dec 23, 2017

@author: Kevin
'''
from functools import wraps

from django.db import connection


def close_db_connection(ExceptionToCheck=Exception, raise_exception=False, notify=False):
    """Close the database connection when we're finished; Django will have to get a new one..."""
    def deco_wrap(f):
        @wraps(f)
        def f_wrap(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            finally:
                # Closing here forces Django to open a fresh connection on the next query.
                connection.close()

        return f_wrap
    return deco_wrap
Kevin Parker
3

SHORT:

  1. pip install mysql_server_has_gone_away

  2. settings.py:

DATABASES = {
    'default': {
        'ENGINE': 'mysql_server_has_gone_away'
    }
}

TL;DR

In my case I was using the Django ORM with a long-living request (a websocket). CONN_MAX_AGE doesn't work in my case.

At the beginning I created a wrapper that would try to catch the error, but because of Django's lazy loading it's just not clear which calls you should wrap, so I ended up duplicating this code across the project, which was kind of a pain. Instead of writing e.g. User.objects.get(id=3) I would do do_db(User.objects.get, id=3), where do_db was essentially try: return callback(*args, **kwargs); except: conn.close(); return callback(*args, **kwargs), as in the sketch below.
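
A rough sketch of that wrapper, based on the description above (the name do_db and the caught exception types are assumptions, not the exact code):

from django.db import connection
from django.db.utils import InterfaceError, OperationalError

def do_db(callback, *args, **kwargs):
    """Run a single ORM call, retrying once after re-opening a stale connection."""
    try:
        return callback(*args, **kwargs)
    except (OperationalError, InterfaceError):
        connection.close()  # Django opens a fresh connection on the next query
        return callback(*args, **kwargs)

# usage: do_db(User.objects.get, id=3)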

Digging into the Django backend, we can move this solution to the connection level, so every query that goes to the DB is wrapped with it:

settings.py:

DATABASES = {
    'default': {
        'ENGINE': 'lol'
    }
}

lol/base.py:

""" 
https://stackoverflow.com/a/60894948/3872976
"""

from django.db.backends.mysql import base


def check_mysql_gone_away(db_wrapper):
    def decorate(f):
        def wrapper(self, query, args=None):
            try:
                return f(self, query, args)
            except (base.Database.OperationalError, base.Database.InterfaceError) as e:
                if 'MySQL server has gone away' in str(e):
                    db_wrapper.connection.close()
                    db_wrapper.connect()
                    self.cursor = db_wrapper.connection.cursor()
                    return f(self, query, args)
                # Map some error codes to IntegrityError, since they seem to be
                # misclassified and Django would prefer the more logical place.
                if e.args[0] in self.codes_for_integrityerror:
                    raise base.utils.IntegrityError(*tuple(e.args))
                raise
        return wrapper

    return decorate


class DatabaseWrapper(base.DatabaseWrapper):

    def create_cursor(self, name=None):

        class CursorWrapper(base.CursorWrapper):

            @check_mysql_gone_away(self)
            def execute(self, query, args=None):
                return self.cursor.execute(query, args)

            @check_mysql_gone_away(self)
            def executemany(self, query, args):
                return self.cursor.executemany(query, args)

        cursor = self.connection.cursor()
        return CursorWrapper(cursor)

You should pay attention: you are going to have issues with transactions if MySQL disconnects during an atomic operation. But unfortunately there's no way around it.

deathangel908
2

Perhaps the timeout is the issue for some, but I ran into this error when trying to write a very large BLOB field. I resolved it by increasing the maximum allowed packet size in the MySQL configuration file...

max_allowed_packet=4M

Don't forget to restart MySQL after changing /etc/my.cnf. This page helped...

http://dev.mysql.com/doc/refman/5.5/en/packet-too-large.html
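
To verify the value that is actually in effect after the restart, something like this works from a Django shell (a sketch; the variable name is MySQL's, the rest is my own):

from django.db import connection

with connection.cursor() as cursor:
    cursor.execute("SHOW VARIABLES LIKE 'max_allowed_packet'")
    _, value = cursor.fetchone()
    print(int(value) / (1024 * 1024), "MB")  # should report 4 MB after the change above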

Paul Kenjora
1

I was facing the same problem; here is my solution. I am new to Django and I am already late here, but I am posting my solution anyway. Maybe it will help someone.

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'your_database',
        'USER': 'your_user',
        'PASSWORD': 'your_password',
        'HOST': 'your_host',
        'PORT': 'your_port',
        'CONN_MAX_AGE': 290,
        'OPTIONS': {
            'connect_timeout': 20,
        },
    }
}

I added CONN_MAX_AGE and OPTIONS and it is working perfectly now.

1

I know that this is an old question, but the solution in my case wasn't in any of the current answers.

The root of my problem was on the server side. By checking the server timeout settings (using show global variables like '%timeout') I found out that the wait_timeout variable was set to 120 seconds, so any task that took longer than that would throw the 'server has gone away' exception when I tried to retrieve or save data. Adding CONN_MAX_AGE didn't make any difference.

Solution #1 - change the server's wait_timeout setting (in case you have permission to do it)

Use set global wait_timeout=1*60*60 on your server (the time is given in seconds).

Solution #2 - force a new connection after db idle time

In your Django view/task:

from django.db import connection
from django.db.utils import OperationalError

cursor = connection.cursor()
cursor.execute('SELECT 1')  # <-- no issues here

# time consuming code...

try:
    cursor.execute('SELECT 1')
except OperationalError:
    connection.connect()
    cursor = connection.cursor()
    cursor.execute('SELECT 1')

Note: this should also work with ORM requests (Django version 3.1.6).

jpakes
0

To fix OperationalError: (2006, 'MySQL server has gone away') for Django >= 4.1, you can update the Django settings:

  1. Use persistent connections to the database by setting CONN_MAX_AGE:

DATABASES = {
    'default': {
        # ... other connection settings ...
        'CONN_MAX_AGE': 60,  # must be less than your MySQL `interactive_timeout`/`wait_timeout`
    }
}

  2. Enable health checks for reused persistent connections by setting CONN_HEALTH_CHECKS:

DATABASES = {
    'default': {
        # ... other connection settings ...
        'CONN_HEALTH_CHECKS': True,
    }
}

As a result, for each DB request Django will ping the database to check whether the connection is alive (DatabaseWrapper.is_usable) and, if not, Django will close the connection.
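
Putting the two settings together, the relevant part of settings.py might look like this (a sketch; the engine and the 60-second value are just examples):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        # ... NAME, USER, PASSWORD, HOST, PORT ...
        'CONN_MAX_AGE': 60,          # keep below MySQL's wait_timeout
        'CONN_HEALTH_CHECKS': True,  # ping once per request before reusing the connection
    }
}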

pymen