
Let's have a look at the next snippet:

from sqlalchemy import event, exc
from sqlalchemy.pool import Pool

@event.listens_for(Pool, "checkout")
def check_connection(dbapi_con, con_record, con_proxy):
    cursor = dbapi_con.cursor()
    try:
        cursor.execute("SELECT 1")  # could also be dbapi_con.ping(),
                                    # not sure what is better
    except exc.OperationalError as ex:
        if ex.args[0] in (2006,   # MySQL server has gone away
                          2013,   # Lost connection to MySQL server during query
                          2055):  # Lost connection to MySQL server at '%s', system error: %d
            # caught by pool, which will retry with a new connection
            raise exc.DisconnectionError()
        else:
            raise


engine = create_engine('mysql://user:puss123@10.0.51.5/dbname',
                       pool_recycle=3600, pool_size=10,
                       listeners=[check_connection])

session_factory = sessionmaker(bind=engine, autoflush=True, autocommit=False)
db_session = session_factory()

...
some code that may take several hours to run
...

db_session.execute('SELECT * FROM ' + P_TABLE + " WHERE id = '%s'" % id)        

I thought that registering the check_connection function for the checkout event would solve it, but it didn't. The question now is: how am I supposed to tell SQLAlchemy to handle connection dropouts, so that every time I call execute() it checks whether the connection is available and, if it isn't, establishes it again?

----UPDATE----

The version of SQLAlchemy is 0.7.4

----UPDATE----

def checkout_listener(dbapi_con, con_record, con_proxy):
    try:
        try:
            dbapi_con.ping(False)
        except TypeError:
            dbapi_con.ping()
    except dbapi_con.OperationalError as exc:
        if exc.args[0] in (2006, 2013, 2014, 2045, 2055):
            raise DisconnectionError()
        else:
            raise


engine = create_engine(CONNECTION_URI, pool_recycle=3600, pool_size=10)
event.listen(engine, 'checkout', checkout_listener)
session_factory = sessionmaker(bind=engine, autoflush=True, autocommit=False)
db_session = session_factory()

session_factory is sent to every newly created thread

class IncidentProcessor(threading.Thread):

    def __init__(self, queue, session_factory):
        if not isinstance(queue, Queue.Queue):
            raise TypeError("first argument should be of %s" % type(Queue.Queue))
        self.queue = queue
        self.db_session = scoped_session(session_factory)
        threading.Thread.__init__(self)

    def run(self):

        self.db_session().execute('SELECT * FROM ...')

        ...
        some code that takes a lot of time
        ...

        self.db_session().execute('SELECT * FROM ...')

Now, when execute runs after a long period of time, I get the "MySQL server has gone away" error.

Xeus
  • Did you check if checkout event is being fired? Also did you decorate your function check_connection with @event.listens_for(Pool, "checkout")? This link may be helpful http://docs.sqlalchemy.org/en/latest/core/pooling.html#disconnect-handling-pessimistic – lazy functor Aug 05 '13 at 09:19
  • oh right i forgot to add the decorator now it seems like it tries to use the function but its throws an exception saying - "does not implement any of: first_connect, checkin, checkout, connect" – Xeus Aug 05 '13 at 09:24
  • What version of sqlalchemy are you using? Also could you post the edited code with decorator here? – lazy functor Aug 05 '13 at 09:30
  • I updated the source code, and the version of sqlalchemy is 0.7.4 – Xeus Aug 05 '13 at 09:39
  • Remove listeners=[check_connection] keyword parameter from create_engine call. It's not required now that you have a decorator which tells sqlalchemy that check_connection is a listener. This is the reason it expects implementation of one of these "first_connect, checkin, checkout, connect" – lazy functor Aug 05 '13 at 09:46
  • okay it works now, i'll keep you updated here in a case it will actually overcome the "mysql server has gone away" error and thanks alot!! – Xeus Aug 05 '13 at 10:00
  • @Xeus What did you change to get everything to work? I have the exact same problem (connections time out after a long period of inactivity). Would you be willing to post a "gist" of your solution? – Tom Dec 08 '14 at 23:00

3 Answers

10

There was a talk about this, and this doc describes the problem pretty nicely, so I used their recommended approach to handle such errors: http://discorporate.us/jek/talks/SQLAlchemy-EuroPython2010.pdf

It looks something like this:

from sqlalchemy import create_engine, event
from sqlalchemy.exc import DisconnectionError


def checkout_listener(dbapi_con, con_record, con_proxy):
    try:
        try:
            dbapi_con.ping(False)
        except TypeError:
            dbapi_con.ping()
    except dbapi_con.OperationalError as exc:
        if exc.args[0] in (2006, 2013, 2014, 2045, 2055):
            raise DisconnectionError()
        else:
            raise


db_engine = create_engine(DATABASE_CONNECTION_INFO,
                          pool_size=100,
                          pool_recycle=3600)
event.listen(db_engine, 'checkout', checkout_listener)
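
A follow-up from the comment thread below: create the scoped_session once per process, next to the engine, and share it with the worker threads. A minimal sketch of that layout (the in-memory SQLite engine is an illustrative stand-in for the MySQL engine with the checkout listener attached; names are assumptions, not part of the original answer):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker, scoped_session

# stand-in engine; in this answer it would be the MySQL engine
# with the 'checkout' listener registered via event.listen()
db_engine = create_engine('sqlite://')

# one scoped_session per process, created where the engine lives
session_factory = sessionmaker(bind=db_engine, autoflush=True)
db_session = scoped_session(session_factory)

# worker threads use db_session directly; scoped_session hands each
# thread its own underlying Session, so no per-thread setup is needed
# (text() is required on newer SQLAlchemy; 0.7 accepted raw strings)
result = db_session.execute(text('SELECT 1')).scalar()
```

The point is that scoped_session itself is the thread-local registry, so wrapping the factory again inside each thread is unnecessary.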
andrean
  • hey i tried this but still after few hours of working the script gets to a point where it tries to perform execute('SELECT * FROM ...') and still i get the same error saying MySQL has gone away any ideas how do i overcome this ? – Xeus Aug 06 '13 at 08:38
  • is it possible you're initiating db connections not through the engine where this event handler is attached? also, I see you modified the snippet I pasted here a bit, is the current version of the snippet you use in your app the one you posted in your question? – andrean Aug 06 '13 at 08:58
  • Hey i updated the code in my post that exactly how it is being done in my code can you spot the problem? – Xeus Aug 06 '13 at 09:43
  • one thing, although I'm not sure if it's related to the problem is that you're initiating different sessions for all threads. I had a similar problem a while back, even posted a question about it: http://stackoverflow.com/questions/9619789/sqlalchemy-proper-session-handling-in-multi-thread-applications what was important is that only one db session is needed per app process for scoped_session's. try creating the scoped session outside of the thread, where the engine is created as well, then pass the result object to the threads. and do not invoke them again, just use `self.db_session.execute` – andrean Aug 06 '13 at 09:56
  • In the above code there are two `ping()` calls, and if both fail an exception is raised. Is there a way (does it make sense) to try to create a new connection instead of raising? Or is the ping sufficient to reestablish new connections if they have timed out? – Jens May 05 '15 at 12:55
  • If the two pings don't help, connection most likely cannot be established due to a real issue in connectivity – andrean May 05 '15 at 13:53
  • I used to run unit tests on an in-memory sqlite engine. So, with this type of connection I don't have `ping()`. Adding another except (`except AttributeError: return`) solved my problem. Might help someone else. – sprw Mar 04 '16 at 20:46
3

Try the pool_recycle argument to create_engine.

From the documentation:

Connection Timeouts

MySQL features an automatic connection close behavior, for connections that have been idle for eight hours or more. To circumvent having this issue, use the pool_recycle option which controls the maximum age of any connection:

engine = create_engine('mysql+mysqldb://...', pool_recycle=3600)
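
As the comments below note, newer SQLAlchemy releases (1.2+) also offer the pool_pre_ping option, which tests each connection with a lightweight ping on checkout and transparently replaces stale ones. A sketch combining it with pool_recycle (the helper name and DSN are illustrative assumptions):

```python
from sqlalchemy import create_engine

def make_engine(url):
    # pool_pre_ping (SQLAlchemy 1.2+) issues a cheap ping before each
    # checkout and transparently recycles connections the server closed;
    # pool_recycle still caps the maximum age of any pooled connection
    return create_engine(url, pool_recycle=3600, pool_pre_ping=True)

# e.g. make_engine('mysql+mysqldb://user:pass@host/dbname')
```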

codeape
  • I tried using pool_recycle, still after 8 hours I am getting the same error. – A J Oct 31 '15 at 17:31
  • Strange. To reproduce without waiting 8 hours, try to change the MySQL config values wait_timeout and interactive_timeout to a very low number (like 10 seconds). – codeape Nov 01 '15 at 21:10
  • Yes I did the same. But the problem is solved by using flask-sqlalchemy https://pythonhosted.org/Flask-SQLAlchemy/. – A J Nov 03 '15 at 02:25
  • for me, `pool_recycle` actually raised errors more frequently and the errors were vary that i didn't know how to get out of it ... eventually i just rolled the code back – Ricky Levi Mar 08 '23 at 09:39
  • @RickyLevi Are your sessions/connections long-lived? In a web app, you typically use a session only for the duration of a single request. Flask-sqlalchemy uses a pool_recycle value of 7200, BTW. You can also try the pool_pre_ping argument to create_engine – codeape Mar 08 '23 at 10:37
  • Yes, i use `pool_pre_ping` it made things better, but still after 8 hours i receive the error, which a single page refresh fix ( for everyone ) I use `tornado` so i'm using `tornado-sqlalchemy` ... which creates and manage the session for me. `Tornado` is always alive ... it's not a CGI behavior ... always up and running, this is why i needed something that manage my connections as I couldn't do it on my own – Ricky Levi Mar 08 '23 at 10:49
  • I see. And you use the `with self.make_session() as session:`-pattern within your request handlers, ensuring the session is closed? – codeape Mar 08 '23 at 12:12
-1

You can try something like this:

while True:
    try:
        db_session.execute('SELECT * FROM ' + PONY_TABLE + " WHERE id = '%s'" % incident_id)
        break
    except SQLAlchemyError:
        db_session.rollback()

If the connection has gone away, this will raise an exception; the session will be rolled back, and the retry is likely to succeed.
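
As the comment exchange below suggests, this can be factored into a small wrapper that is more specific about the caught exception. A hedged sketch (the function name and the single-retry limit are assumptions, not part of the original answer):

```python
from sqlalchemy.exc import SQLAlchemyError

def execute_with_retry(session, statement, retries=1):
    # roll back and retry on a dropped connection; re-raise once the
    # retry budget is exhausted so real errors still surface
    for attempt in range(retries + 1):
        try:
            return session.execute(statement)
        except SQLAlchemyError:
            session.rollback()
            if attempt == retries:
                raise
```

This avoids the unbounded `while True` loop, which would spin forever if the error is not transient.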

Xaqq
  • The actual exception that is being thrown is -OperationalError: (OperationalError) (2006, 'MySQL server has gone away') i dont think i would like to do this everytime i want to execute sql query and i'm sure SQLAlchemy gives a much more generic solution than that – Xeus Aug 05 '13 at 09:22
  • You could write a little wrapper around this, and be more specific about the caught exception. But you're right, it's likely that there is a better solution. This is more of a workaround. – Xaqq Aug 05 '13 at 09:25