There are several issues to think about here: will there be a single process using your API key at any one time, or is it possible that multiple processes would be running at once? If you have multiple delayed_job workers, I think the latter is likely. I haven't used delayed_job enough to give you a good solution to that, but my feeling is you would be restricted to a single worker.
I am currently working on a similar problem, with an API that is restricted to 1 request every 0.5 seconds and a maximum of 1,000 per day. I haven't worked out how I want to track the per-day usage yet (one simple possibility is sketched below), but I've handled the per-second restriction using threads. If you can frame your restriction as "1 request every 0.2 seconds", that frees you from having to track it on a minute-by-minute basis (though you still have the issue of how to keep track of multiple workers).
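For the per-day cap, one simple possibility (not something I've settled on) is a counter that resets when the date rolls over. A minimal sketch for a single process; DailyQuota and DAILY_LIMIT are hypothetical names, not part of any library:

require 'date'

DAILY_LIMIT = 1000 # the API's per-day cap

class DailyQuota
  def initialize(limit = DAILY_LIMIT)
    @limit = limit
    @date  = Date.today
    @count = 0
  end

  # Returns true (and counts the request) while today's quota lasts.
  def request_allowed?
    if Date.today != @date
      @date  = Date.today # a new day has started: reset the counter
      @count = 0
    end
    return false if @count >= @limit
    @count += 1
    true
  end
end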
For the per-second restriction, the basic idea is that I have a request method that splits a single request into a queue of request parameters (based on the maximum number of objects the API allows per request), and then another method iterates over that queue and calls a block that sends the actual request to the remote server. Something like this:
REQUEST_INTERVAL = 0.5 # seconds between request starts (0.2 for your per-minute limit)

def make_multiple_requests(queue)
  result = []
  queue.each do |request|
    # Run the interval timer and the request concurrently, then wait for
    # both: the next request starts as soon as the slower of the two ends.
    timer = Thread.new { sleep REQUEST_INTERVAL }
    execution = Thread.new { result << yield(request) }
    [timer, execution].each(&:join)
  end
  result
end
To use it:
make_multiple_requests(queue) do |request|
  your_request_method_goes_here(request)
end
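As for building the queue in the first place, slicing the full list of items into API-sized chunks is usually enough. A sketch, where MAX_OBJECTS_PER_REQUEST and all_object_ids are hypothetical stand-ins for your API's per-request cap and your real data:

MAX_OBJECTS_PER_REQUEST = 50 # hypothetical cap imposed by the API
all_object_ids = (1..500).to_a # placeholder for your real list of ids

# Each element of the queue becomes the parameter set for one request.
queue = all_object_ids.each_slice(MAX_OBJECTS_PER_REQUEST).to_a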
The main benefit of the threaded loop is that if a request takes longer than the allowed interval, you don't have to wait around for the sleep to finish; you can start your next request right away. It just guarantees that the next request won't start until at least the interval has passed. I've noticed that even though the interval is set correctly, I occasionally get an 'over-quota' response from the API. In those cases, the request is retried after the appropriate interval has passed.
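That retry can live in a thin wrapper around the request method itself. A sketch, assuming the client raises an exception on the over-quota response; OverQuotaError is a hypothetical name:

class OverQuotaError < StandardError; end # hypothetical over-quota exception

def request_with_retry(request, max_attempts = 3)
  attempts = 0
  begin
    attempts += 1
    your_request_method_goes_here(request)
  rescue OverQuotaError
    raise if attempts >= max_attempts # give up after a few tries
    sleep REQUEST_INTERVAL            # wait out the interval, then retry
    retry
  end
end

Dropping request_with_retry into the block passed to make_multiple_requests keeps the rate-limiting loop itself unchanged.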