
I have a Laravel application where the application servers sit behind a load balancer. These servers run cron jobs, some of which should run only once (i.e. on exactly one instance).

I did some research and found that people seem to favor a locking system: you keep all the cron jobs active on every application box, and when one box goes to process a job, it creates some sort of lock so the others know not to process the same job.

I was wondering if anyone had more details on this approach with regard to AWS, or whether there's a better solution to this problem?

djt

3 Answers


You can build a distributed locking mechanism on AWS using DynamoDB conditional writes with strongly consistent reads. You can do something similar using Redis (ElastiCache).
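A minimal sketch of that locking pattern, assuming only an atomic set-if-absent primitive. The `InMemoryStore` class below is a local stand-in so the sketch runs anywhere; in practice the primitive would be Redis's `SET key value NX EX ttl` or a DynamoDB `put_item` with `ConditionExpression="attribute_not_exists(...)"`. The key naming and TTL are illustrative, not from the answer.

```python
import time
import uuid

class InMemoryStore:
    """Local stand-in for the real lock store (Redis or DynamoDB).
    In production, set_if_absent would be Redis `SET key value NX EX ttl`
    or DynamoDB put_item with ConditionExpression="attribute_not_exists(pk)".
    """
    def __init__(self):
        self._data = {}  # key -> (value, expires_at)

    def set_if_absent(self, key, value, ttl_seconds):
        # Atomic "create only if no unexpired entry exists" primitive.
        now = time.time()
        existing = self._data.get(key)
        if existing is not None and existing[1] > now:
            return False  # lock is held by someone else
        self._data[key] = (value, now + ttl_seconds)
        return True

def try_acquire_lock(store, job_name, ttl_seconds=300):
    """Every server calls this before running the cron job; only one wins."""
    owner = str(uuid.uuid4())  # identifies this server's attempt
    if store.set_if_absent("cron-lock:" + job_name, owner, ttl_seconds):
        return owner  # we hold the lock: run the job
    return None  # another server already took it; skip this run

store = InMemoryStore()
first = try_acquire_lock(store, "send-invoices")
second = try_acquire_lock(store, "send-invoices")
print(first is not None, second is None)  # True True
```

The TTL matters: if the server holding the lock dies mid-job, the lock expires and the next run can proceed instead of being blocked forever.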

Alternatively, you could use Lambda scheduled events to send a request to your load balancer on a cron schedule. Since only one back-end server would receive the request, that server could execute the cron job.

These solutions tend to break when your auto-scaling group experiences a scale-in event and the server processing the task gets terminated. I prefer to have a small server, such as a t2.nano, that isn't part of the cluster, and to schedule the cron jobs on that.

Mark B
  • Thanks Mark! I had just been looking into Lambda, but was previously considering making a "cron" box, which would process and fetch the records needed for a particular job, and then push them off to SQS to be handled by workers. However, I was concerned about having a single point of failure (one cron box). – djt Sep 07 '16 at 18:04
  • Lambda cron jobs inserting into SQS would definitely be better than an EC2 cron server inserting into SQS at that point. Lambda would also be cheaper. Just keep in mind that SQS does not guarantee "exactly once" message delivery. Also, with SQS you will have to have services on your servers constantly polling for new messages. – Mark B Sep 07 '16 at 18:08
  • Yeah, we currently have a couple worker servers already polling SQS, so it'd be somewhat minimal to add another worker. I'll have to experiment with Lambda and see what I can do. thanks! – djt Sep 07 '16 at 18:10
  • Good idea re small server setup for cron. Much more reliable IMHO. – Rodrigo Murillo Sep 07 '16 at 18:23
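The Lambda-into-SQS idea from the comments above can be sketched as follows. Everything here is illustrative: the queue URL and job names are made up, and `FakeSQS` is a local stub standing in for boto3's SQS client so the sketch runs without AWS credentials; a real handler would use `boto3.client("sqs")` inside `lambda_handler(event, context)`.

```python
import json

def enqueue_cron_jobs(sqs_client, queue_url, job_names):
    """Sketch of a scheduled Lambda's body: push one SQS message per cron
    job, to be picked up by the worker fleet polling the queue.
    In the real function, sqs_client would be boto3.client("sqs")."""
    sent = []
    for name in job_names:
        resp = sqs_client.send_message(
            QueueUrl=queue_url,
            MessageBody=json.dumps({"job": name}),
        )
        sent.append(resp["MessageId"])
    return sent

class FakeSQS:
    """Minimal stub mimicking the boto3 SQS client, for local testing."""
    def __init__(self):
        self.messages = []

    def send_message(self, QueueUrl, MessageBody):
        self.messages.append((QueueUrl, MessageBody))
        return {"MessageId": str(len(self.messages))}

fake = FakeSQS()
ids = enqueue_cron_jobs(fake, "https://sqs.example/queue", ["send-invoices"])
print(ids)  # ['1']
```

As the comments note, SQS is at-least-once delivery, so the workers consuming these messages should still be idempotent or guarded by a lock.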

Check out this package for a Laravel implementation of the lock system (database-backed): https://packagist.org/packages/jdavidbakr/multi-server-event

Also, this pull request solves the problem using the lock system (cache-backed): https://github.com/laravel/framework/pull/10965

Paras

If you need to run something only once globally (not once on every server) and 'lock' it, I highly recommend AWS SQS, because it offers exactly that: each machine runs a cron that tries to fetch a ticket. If it gets one, it processes it; otherwise it does nothing. So all crons stay active on all machines, but while a ticket is 'in flight' (one machine has received it), that specific ticket cannot be received by another machine.
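A toy model of the 'in flight' behavior described above. `FakeQueue` is a stand-in, not the boto3 API: real code would call `receive_message` and `delete_message` against SQS, which hides a received message from other consumers for the queue's visibility timeout.

```python
import time

class FakeQueue:
    """Toy model of SQS visibility timeout: a received ("in flight") message
    is hidden from other consumers until the timeout elapses or it is
    deleted. Real code would use boto3's receive_message / delete_message."""
    def __init__(self, visibility_timeout=30):
        self.visibility_timeout = visibility_timeout
        self._messages = {}  # id -> (body, invisible_until)

    def send(self, msg_id, body):
        self._messages[msg_id] = (body, 0.0)

    def receive(self):
        now = time.time()
        for msg_id, (body, invisible_until) in self._messages.items():
            if invisible_until <= now:
                # Mark the ticket in flight: hidden from other consumers.
                self._messages[msg_id] = (body, now + self.visibility_timeout)
                return msg_id, body
        return None  # nothing visible: this cron run does nothing

    def delete(self, msg_id):
        self._messages.pop(msg_id, None)  # job done, remove the ticket

q = FakeQueue()
q.send("t1", "nightly-report")
server_a = q.receive()  # first server gets the ticket
server_b = q.receive()  # second server sees nothing while t1 is in flight
print(server_a, server_b)  # ('t1', 'nightly-report') None
```

Note the caveat raised in the comments below: real SQS is at-least-once, so a message can reappear after the visibility timeout if it isn't deleted in time, and processing must tolerate duplicates.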

Luc Hendriks
  • We're using SQS for our worker processes, but I would still need a cron job that sends the messages up to SQS to be processed, which would leave me in the same situation: if all the servers are running the cron, then duplicate messages might get sent up to SQS. – djt Sep 07 '16 at 17:10
  • SQS does not guarantee exactly once message delivery. You end up having to build some sort of lock mechanism for the SQS messages anyway. http://stackoverflow.com/questions/13484845/what-is-a-good-practice-to-achieve-the-exactly-once-delivery-behavior-with-ama – Mark B Sep 07 '16 at 17:12