
I'm running PHP command-line scripts as RabbitMQ consumers which need to connect to a MySQL database. Those scripts run as Symfony2 commands using the Doctrine2 ORM, meaning opening and closing the database connection is handled behind the scenes. The connection is normally closed automatically when the CLI command exits - which, by definition, doesn't happen for a long time in a background consumer.

This is a problem when the consumer is idle (no incoming messages) for longer than the wait_timeout setting in the MySQL server configuration. If no message is consumed for longer than that period, the database server will close the connection and the next message will fail with a "MySQL server has gone away" exception.

I've thought about 2 solutions for the problem:

  1. Open the connection before each message and close it manually after handling the message.
  2. Implement a ping message which runs a dummy SQL query like SELECT 1 FROM table every n minutes, triggered by a cronjob.

The problem with the first approach is: if traffic on that queue is high, opening/closing connections might add significant overhead for the consumer. The second approach just sounds like an ugly hack to deal with the issue, but at least I can keep using a single connection during high-load times.
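A third option worth considering is a sketch of a check-and-reconnect helper, assuming Doctrine DBAL 2.x, where `Connection::ping()` is available (the helper name here is made up): verify the connection before handling each message and reconnect only when the server has actually dropped it, so there is no per-message open/close overhead.

```php
<?php
// Sketch only: assumes Doctrine DBAL 2.x, where Connection::ping()
// exists (it was deprecated in later DBAL versions).
use Doctrine\ORM\EntityManagerInterface;

function ensureConnection(EntityManagerInterface $em): void
{
    $connection = $em->getConnection();

    // ping() issues a lightweight query; it returns false when the
    // server has already closed the connection (wait_timeout hit).
    if (!$connection->ping()) {
        $connection->close();
        $connection->connect();
    }
}

// In the consumer callback, before touching the database:
// ensureConnection($em);
// ...handle the message...
```

Under low traffic this behaves like the ping approach, but without a cronjob; under high traffic the ping succeeds and the existing connection is reused.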

Are there any better solutions for handling doctrine connections in background scripts?

pulse00
  • possible duplicate of [Gearman, ZF2, Doctrine2, MySQL, SQLSTATE\[HY000\]: General error: 2006 MySQL server has gone away](http://stackoverflow.com/questions/28111879/gearman-zf2-doctrine2-mysql-sqlstatehy000-general-error-2006-mysql-serve) – Oleg Abrazhaev Jun 12 '15 at 08:52

3 Answers


Here is another solution: try to avoid long-running Symfony2 workers. They will always cause problems due to their long execution time; the kernel isn't made for that.

The solution here is to build a proxy in front of the real Symfony command, so that every message triggers a fresh Symfony kernel. Sounds like a good solution to me.

http://blog.vandenbrand.org/2015/01/09/symfony2-and-rabbitmq-lessons-learned/
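As a rough illustration of that proxy idea (assuming the Symfony Process component is installed; `app:consume-one` is a hypothetical command that handles exactly one message and exits, and `$message->body` stands in for the payload of whatever AMQP message object your client library provides):

```php
<?php
// Sketch of the proxy approach: the long-running loop only talks to
// RabbitMQ; each message is handed to a fresh PHP process, and thus a
// fresh Symfony kernel and a fresh database connection.
use Symfony\Component\Process\Process;

$callback = function ($message) {
    // "app:consume-one" is a hypothetical single-message command
    $process = new Process(['php', 'bin/console', 'app:consume-one']);
    $process->setInput($message->body); // pass the payload via stdin
    $process->run();

    if (!$process->isSuccessful()) {
        // leave the message unacknowledged so it gets redelivered
        return;
    }
    // ...acknowledge the message here...
};
```

The proxy process itself never opens a database connection, so MySQL's wait_timeout can never hit it; only the short-lived child processes connect.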

flxPeters

My approach is a little bit different. My workers only process one message, then die. I have supervisor configured to create a new worker every time. So, a worker will:

  1. Ask for a new message.
  2. If there are no messages, sleep for 20 seconds. (Without the sleep, supervisor would see workers exiting immediately, assume something is wrong, and stop restarting them.)
  3. If there is a message, process it.
  4. Maybe, if processing a message is super fast, sleep for the same reason as in step 2.
  5. After processing the message, just finish.

This has worked very well using AWS SQS.
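For reference, the supervisor side of such a setup might look roughly like the following entry; the program name and command are placeholders, and `autorestart` is what makes supervisord spawn a fresh worker (with a fresh database connection) after each single-message run:

```ini
; Hypothetical supervisord entry: one message per process,
; supervisord restarts the worker as soon as it exits.
[program:queue-worker]
command=php bin/console app:consume-one
numprocs=4
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
; a quick, clean exit should not be counted as a crash
startsecs=0
```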

Comments are welcome.

amcastror

This is a big problem when running PHP scripts for too long. For me, the best solution is to restart the script from time to time. You can see how to do this in this topic: How to restart PHP script every 1 hour?

You should also run multiple instances of your consumer. Add a message counter to each one and terminate it after a number of runs. You then need a tool to ensure a consistent number of worker processes, something like this: http://kamisama.me/2012/10/12/background-jobs-with-php-and-resque-part-4-managing-worker/
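The counter idea can be sketched as a plain loop; the in-memory `$queue` array below stands in for a real RabbitMQ client, and the message limit is an arbitrary example value:

```php
<?php
// Sketch of the "terminate after N messages" pattern. A process
// supervisor is expected to start a replacement worker afterwards.
$queue = ['job-1', 'job-2', 'job-3']; // stand-in for a real queue client
$maxMessages = 2;
$handled = 0;

while ($handled < $maxMessages) {
    $message = array_shift($queue);
    if ($message === null) {
        // idle: in a real worker, sleep briefly and poll again
        break;
    }
    echo "processing {$message}\n";
    $handled++;
}
// exiting after N messages lets the supervisor start a fresh worker,
// which opens a fresh (not timed-out) database connection
```

Note that, as the question author points out in the comments, this alone does not help when the worker goes idle for a long time between messages; the connection can still time out within a single run.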

flxPeters
  • Thanks for your suggestions. Regarding multiple consumers: That's already the case. Termination after N messages is also implemented already. That won't fix the problem though, if let's say you terminate after 10 messages, and after the 8th you start to get idle very long. – pulse00 Dec 01 '14 at 09:53
  • Regarding the restart approach: that would also kill any worker that is currently processing a message, which would result in an unacknowledged message on the RabbitMQ side and a potential error or warning for any end-users waiting for that job to finish. – pulse00 Dec 01 '14 at 09:55
  • Look also here, it might help : http://stackoverflow.com/questions/14572020/handling-long-running-tasks-in-pika-rabbitmq?rq=1 – Veve Dec 01 '14 at 10:55
  • @Veve thanks for the link. The problem here though is not rabbitMQ closing the connection, but MySQL doing it. Increasing / turning off the `wait_timeout` on the database server is not really an option. – pulse00 Dec 01 '14 at 13:25
  • @Mister Dood: agreed, I didn't because I somehow knew it wouldn't totally fit here. It was more of a hint, hence a comment, with not a lot of room to write. – Veve Dec 01 '14 at 13:29