
SQLAlchemy (0.9.8) and mysql-5.6.21-osx10.8-x86_64 on Mac OS X 10.10.3 (Yosemite)

I keep getting this intermittent error:

InterfaceError: (InterfaceError) 2013: Lost connection to MySQL server during query u'SELECT..... '

I have read a few threads, and most cases are resolved by adding this to my.cnf:

   max_allowed_packet = 1024M

which should be more than big enough for what I am trying to do. After doing this, I still hit it intermittently. I also put these lines in /etc/my.cnf:

   log-error = "/Users/<myname>/tmp/mysql.err.log"
   log-warnings = 3

I am hoping to get more details, but all I see is something like this:

   [Warning] Aborted connection 444 to db: 'dbname' user: 'root' host: 'localhost' (Got an error reading communication packets)

I have reached a point where I think more detailed logging may help, unless there's something else I could try first.
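For reference, SQLAlchemy's own engine and pool loggers can provide more client-side detail (a minimal sketch using SQLAlchemy's standard logger names; not part of the setup described above):

import logging

# "sqlalchemy.engine" logs SQL and connection use; "sqlalchemy.pool" logs
# checkouts, check-ins, recycles and invalidations.
logging.basicConfig()
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)
logging.getLogger("sqlalchemy.pool").setLevel(logging.DEBUG)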

Thanks.

kawingkelvin

4 Answers


It looks like your MySQL connection is timing out after a long period of inactivity; I bet it won't happen if you're constantly querying your DB with the existing settings. There are a couple of settings on both the MySQL and SQLAlchemy sides that should resolve this issue:

  1. Check your SQLAlchemy engine's pool_recycle value; try a different / smaller value, e.g. 1800 (seconds). If you're reading DB settings from a file, set it as

    pool_recycle: 1800

otherwise specify it during engine init, e.g.

from sqlalchemy import create_engine
e = create_engine("mysql://user:pass@localhost/db", pool_recycle=1800)
  2. Check / modify your wait_timeout MySQL variable (see https://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html#sysvar_wait_timeout), which is the number of seconds the server waits for activity on a noninteractive connection before closing it, e.g.

    show global variables like 'wait_timeout';

Find a combination that works for your environment.
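For illustration, here is a minimal sketch of checking both values together (the connection URL and the 1800-second figure are placeholders; a MySQL driver such as MySQLdb is assumed):

from sqlalchemy import create_engine, text

POOL_RECYCLE = 1800  # seconds; should stay below MySQL's wait_timeout

engine = create_engine("mysql://user:pass@localhost/db", pool_recycle=POOL_RECYCLE)

with engine.connect() as conn:
    # Compare the client-side recycle interval with the server-side idle timeout.
    row = conn.execute(text("SHOW GLOBAL VARIABLES LIKE 'wait_timeout'")).fetchone()
    wait_timeout = int(row[1])
    if POOL_RECYCLE >= wait_timeout:
        print("pool_recycle should be smaller than wait_timeout (%d s)" % wait_timeout)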

MOCKBA
  • I don't think my case is due to a timeout. I got this error during continuous requests to the DB, so my code is constantly querying it. I suspect it is a "buffer"-related issue because my cumulative result sets can be quite large in bytes, although an individual request's result set may not necessarily be large. – kawingkelvin Apr 26 '15 at 22:18
  • I am also getting the same issue. I am running a cron job every hour at the 10th minute; for this, I make the connection, but every 10th hour it shows the 2013 connection lost error. Any suggestion? – venkatesh Jun 15 '20 at 09:00
  • This solution has never worked for me. I have set pool_recycle and pool_pre_ping, but it never fixed the issue. – nicbou Dec 02 '20 at 08:23
  • Is there any idea how to reproduce this issue? It would be easier to test a solution if we had a reproducible scenario. – ItayB Jul 01 '22 at 08:29

There are two parameters that could help: pool_recycle and pool_pre_ping.

pool_recycle sets the number of seconds after which an idle connection is recycled. MySQL's default timeout is 8 hours, while SQLAlchemy's default is -1, which means never recycle. That mismatch is the problem: if MySQL has dropped the connection but SQLAlchemy has not recycled it, the Lost connection exception will be raised.

pool_pre_ping tests the connection's liveness before it is used. As I understand it, this works as a back-up strategy: if a connection has been dropped by MySQL but SQLAlchemy doesn't know it yet, the pre-ping check detects that and avoids handing out the invalid connection.

create_engine(<mysql conn url>, pool_recycle=60 * 5, pool_pre_ping=True)
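For completeness, a usage sketch with a session factory (the URL is a placeholder; pool_pre_ping requires SQLAlchemy 1.2 or newer):

from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine(
    "mysql://user:pass@localhost/db",   # placeholder URL
    pool_recycle=60 * 5,                # recycle idle connections after 5 minutes
    pool_pre_ping=True,                 # lightweight liveness check before each checkout
)
Session = sessionmaker(bind=engine)

session = Session()
try:
    session.execute(text("SELECT 1"))   # a stale connection is detected and replaced transparently
    session.commit()
finally:
    session.close()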
buxizhizhoum

Based on suggestions from this, this and many other articles on the internet, wrapping all my functions with the following decorator helped me resolve the "Lost Connection" issue with MariaDB as the backend DB. Please note that db below is an instance of flask_sqlalchemy.SQLAlchemy, but the concept remains the same for a plain SQLAlchemy session too.

import functools


def manage_session(f):
    @functools.wraps(f)  # preserve the wrapped function's name and docstring
    def inner(*args, **kwargs):

        # MANUAL PRE PING: run a trivial query so a dead connection is
        # detected and discarded before the real work starts.
        try:
            db.session.execute("SELECT 1;")
            db.session.commit()
        except Exception:
            db.session.rollback()
        finally:
            db.session.close()

        # SESSION COMMIT, ROLLBACK, CLOSE around the wrapped function.
        try:
            res = f(*args, **kwargs)
            db.session.commit()
            return res
        except Exception:
            db.session.rollback()
            raise
            # OR return traceback.format_exc()
        finally:
            db.session.close()
    return inner

I also added a pool_recycle of 50 seconds in the Flask-SQLAlchemy config, but that didn't visibly contribute to the solution.
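For reference, a sketch of how that setting can be expressed in the Flask-SQLAlchemy config (the SQLALCHEMY_ENGINE_OPTIONS key assumes Flask-SQLAlchemy 2.4 or newer; older releases used SQLALCHEMY_POOL_RECYCLE):

# Flask app config; mirrors the 50-second recycle mentioned above.
SQLALCHEMY_ENGINE_OPTIONS = {
    "pool_recycle": 50,
    "pool_pre_ping": True,
}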

EDIT1:

Below is a sample snippet of how it was used in the final code:

from flask_restful import Resource

class DataAPI(Resource):

    @manage_session
    def get(self):
        # Get data rows from DB (body elided in the original answer)
        ...

Srmsbrmnm

None of the previous solutions worked for me. I managed to solve it and developed a theory. I consider myself a layman in MySQL architecture, so if you understand it better, please add to my suggestion.

In my case I was getting this error, but the query in question was not the problem, and neither was the query before it. What happened is that I saved the results of some earlier queries in instances, and I believe that kept a connection to the database open. After a long series of processing steps I only performed another query minutes later.

That connection ended up dying without warning, and when I tried to perform a new query MySQL threw this error. For some reason increasing the connection timeout did not help. I noticed that issuing empty commits over time fixed the problem:

db.session.commit()
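A minimal sketch of that keep-alive idea (db is assumed to be a Flask-SQLAlchemy instance configured elsewhere; the sleep stands in for the minutes of non-database processing):

import time

def process_with_keepalive(db, work_items, step_seconds=120):
    results = []
    for item in work_items:
        time.sleep(step_seconds)   # placeholder for long processing that never touches the DB
        results.append(item)
        db.session.commit()        # empty commit so the MySQL connection does not idle out
    return results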
Mithsew
  • In my case, I'm able to easily reproduce the issue by running a `tornado` app with `sqlalchemy` and then, at some point, manually restarting `mysqld`. I then see the error, which goes away after a refresh of the page (because the connection is replaced by a new session/engine, I don't know). The underlying issue is that `tornado-sqlalchemy` (or, in the other case, `Flask-sqlalchemy`) maintains the session all the time without recycling, and the recycle parameter doesn't work for me (it is being ignored). – Ricky Levi Mar 26 '23 at 10:13