
I am using SQLAlchemy with two MySQL databases: my development database, hosted locally on my machine, and a production MySQL server provided by ClearDB on Heroku.

I have a long-running session open with the database while it performs a syncing operation with another service. On my local machine this finishes fine, but on production I get the error (2013, 'Lost connection to MySQL server during query').

I've read other posts saying it can be either that the request is too large or that the pool recycle setting needs adjusting. I don't believe the transaction payload is especially large, and setting a `pool_recycle` value when calling SQLAlchemy's `create_engine` hasn't seemed to help.
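
For reference, this is roughly how I'm creating the engine at the moment; the connection URL and the `pool_recycle` value below are placeholders rather than my real settings:

```python
from sqlalchemy import create_engine

# Placeholder URL; the real one points at the local MySQL / ClearDB instance.
# pool_recycle was set to an arbitrary value here, which didn't fix the error.
engine = create_engine(
    'mysql://user:password@host/database',
    pool_recycle=3600,
)
```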

Has anyone else experienced this problem, or is anyone able to help me narrow down the underlying reason for that error? It seems like a catch-all and I'm not sure where to go from here.

As requested in the comments, both systems return the same values for `select @@interactive_timeout, @@wait_timeout`: 28800, 28800.

Thanks

freebie
    Please issue this SQL query on both your dev database and the production database. `select @@interactive_timeout, @@wait_timeout` . Please [edit] your question to tell us what values you have on your two databases. Sometimes production databases have shorter timeouts than dev databases. – O. Jones Nov 17 '16 at 19:13
  • Ah I did mean to include these numbers when I started posting this... but forgot. I've updated my question. Thanks @O.Jones – freebie Nov 17 '16 at 19:25
  • Did you select the variables in an interactive client session? If so, you need to do `SELECT @@global.interactive_timeout, @@global.wait_timeout`. In an interactive session the session-level `wait_timeout` is adjusted to the `interactive timeout`, so it's useless. – elenst Nov 25 '16 at 22:37
  • Hi @elenst, not too sure what you meant by this. I am using Sequel Pro to make these queries (if that helps). I tried them again with global and got the same results I posted above :/. – freebie Nov 25 '16 at 22:52
  • If you do a ping on the server at the beginning of the session and after the connection has timed out, do you get the same IP address? Might be a "routing" issue. – Mathieu de Lorimier Nov 30 '16 at 16:24
  • "I have a long running session open with the database while it performs a syncing operation with another service" can you clarify this. Any code? Any query you are running? – e4c5 Dec 01 '16 at 10:58
  • Hi @e4c5, sorry I've been meaning to answer this question myself. Thanks for posting a comment and reminding me. – freebie Dec 01 '16 at 11:59

1 Answer


Even though the two databases appear to use the same timeouts and general config, it turned out to be a timeout enforced elsewhere by ClearDB.

ClearDB monitors connections and kills them when they have been open for over a minute. I wasn't originally able to find this documented.

The fix was setting the pool_recycle parameter to pool_recycle=60 when creating the engine. In my previous attempt I had used an arbitrary number higher than this (because I didn't know ClearDB's timeout).
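
For a concrete example, the engine setup now looks roughly like this (the connection URL is a placeholder):

```python
from sqlalchemy import create_engine

# Replace any pooled connection older than 60 seconds at checkout, so a
# connection ClearDB has killed after its ~1 minute limit is never reused.
engine = create_engine(
    'mysql://user:password@host/database',
    pool_recycle=60,
)
```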

freebie
  • Just for anyone who is arriving and wants to see it straight up: `create_engine('mysql://user:pass@host/database', pool_recycle=60, convert_unicode=True)` – tandy May 03 '18 at 03:09
  • @tandy where do we put that code? – cheese Oct 24 '21 at 03:23
  • @cheese -- I believe where you are creating your database connection engine. I'm not sure what your project looks like, though – tandy Oct 25 '21 at 04:18