
I'm asking for support or some specialized knowledge.

In my app I'm using Google Drive and need to insert/remove permissions on 1000+ files. The Google API does not allow setting permissions on multiple files in one request, so I need to make 1000+ requests (or wrap them into one batch or a few batches).

My requests look like this:

POST /drive/v2/files/0B18tlN6SgYRaUzl1MUlnNHlGSEE/permissions?sendNotificationEmails=false&quotaUser=00787465305247954313&alt=json

But when I make a batch request to the Google Drive API, the batch response contains a lot of error responses. The first few responses in the batch are OK, but not all of them.

Most of the error responses look like this:

Content-Type: application/http
Content-ID: <response-280781395>

HTTP/1.1 403 Forbidden
Content-Type: application/json; charset=UTF-8
Date: Tue, 09 Sep 2014 11:45:03 GMT
Expires: Tue, 09 Sep 2014 11:45:03 GMT
Cache-Control: private, max-age=0
Content-Length: 199

{
 "error": {
  "errors": [
   {
    "domain": "usageLimits",
    "reason": "rateLimitExceeded",
    "message": "Rate Limit Exceeded"
   }
  ],
  "code": 403,
  "message": "Rate Limit Exceeded"
 }
}

Some of them look like this:

HTTP/1.1 500 Internal Server Error
Content-Type: application/json; charset=UTF-8
Date: Tue, 09 Sep 2014 11:45:03 GMT
Expires: Tue, 09 Sep 2014 11:45:03 GMT
Cache-Control: private, max-age=0
Content-Length: 180

{
 "error": {
  "errors": [
   {
    "domain": "global",
    "reason": "internalError",
    "message": "Internal Error"
   }
  ],
  "code": 500,
  "message": "Internal Error"
 }
}

And the last few responses in the batch look like this:

Content-Type: application/http
Content-ID: <response-901482964>

HTTP/1.1 400 Bad Request
Content-Type: application/json; charset=UTF-8
Date: Tue, 09 Sep 2014 11:45:03 GMT
Expires: Tue, 09 Sep 2014 11:45:03 GMT
Cache-Control: private, max-age=0
Content-Length: 171

{
 "error": {
  "errors": [
   {
    "domain": "global",
    "reason": "badRequest",
    "message": "Bad Request"
   }
  ],
  "code": 400,
  "message": "Bad Request"
 }
}

What is strange: when I got the quota limit error, I decided to increase the requests/user/second quota in the console, and found it was already set to 10,000/user/second. I sent only 1000 requests in the batch but still got "403 Rate Limit Exceeded" errors, and then began to get "500 Internal Error" errors.

Then I found that I should set the 'quotaUser' param on each request, but nothing changed; I still get the same errors.

Investigation in the Google Drive web UI showed me how Google Drive sets permissions on files when you share 1000+ of them. It took about 20-30 minutes before all my files were shared. In the Activity panel I saw that files were shared in bunches: sometimes a bunch was 1 file, other times 40 files. But all the files were shared in the end, so it looks like the Google Drive team uses something like a retrying queue, or just doesn't apply quotas to its own requests.

My main questions are:

  • Why am I still getting these quota errors (with a 10,000 requests/user/second limit specified in the Google Cloud Console), and how can I avoid them?
  • Why does the Google Drive API return "500 Internal Error" and "400 Bad Request" errors?
  • Any workarounds for this?

P.S.: I'm willing to help the Google Drive team; I can provide full traces and code samples.
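For context, the way I split the permission inserts into batches looks roughly like this (a simplified sketch; `insert_permission_batch` is a made-up placeholder for the actual client-library batch call, and the file IDs are placeholders too):

```python
def chunk(items, size):
    """Yield successive size-sized slices of items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Placeholder file IDs; in my app these come from a Drive files.list call.
file_ids = ["file_%d" % n for n in range(1000)]

for batch in chunk(file_ids, 100):
    # insert_permission_batch(batch)  # hypothetical: one batch HTTP request
    pass
```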

Vlad Tsepelev
  • see http://stackoverflow.com/questions/18529524/403-rate-limit-after-only-1-insert-per-second and http://stackoverflow.com/questions/18578768/403-rate-limit-on-insert-sometimes-succeeds. It looks like a bug that Google knows about but won't fix – pinoyyid Sep 09 '14 at 15:06
  • @pinoyyid What worked for you? Any graceful batch sizes or back-offs? – Vlad Tsepelev Sep 09 '14 at 16:16
  • I gave up on batches because they exacerbated the problem. You would think that after sending a batch, Drive's internal flow control would process the requests correctly, but no. There seems to be a bucket of 20-30 tokens, so if you are updating fewer than 30 files, you're OK. Above that figure, the token replenishment feels (anecdotally) to be approx 1 per second. So that's your effective Drive write limit after the first 30. Backoff is a last resort - it kills throughput. You should throttle proactively to avoid it. I submit updates in a timeOut(,1500) loop and apologise to my users. – pinoyyid Sep 10 '14 at 02:59
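pinoyyid's bucket observation can be modeled as a simple token-bucket throttle. A minimal sketch (the 30-token capacity and 1 token/second refill are his anecdotal figures, not documented Drive limits):

```python
import time

class TokenBucket:
    """Anecdotal model of Drive's write limit: ~30 burst tokens,
    refilled at ~1 token per second."""

    def __init__(self, capacity=30, refill_per_sec=1.0):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.refill_per_sec)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            # Sleep just long enough for the next token to arrive.
            time.sleep((1 - self.tokens) / self.refill_per_sec)

# Usage: call bucket.acquire() before each permission-insert request.
```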

1 Answer


While you can set a per-user QPS limit to keep malicious users from consuming your quota, setting it to an arbitrarily high number doesn't mean you'll be able to make that many requests. There are per-app QPS limits that kick in as well, and the default is well below 1000 QPS.

If you need additional quota, request it via the developers console.

Steve Bazyl
  • OK, I got it! You're saying my requests are blocked by that hidden per-app QPS quota, and if I ask for additional quota the errors will disappear? But what should I do about the 500 Internal Errors? Are they caused by quota too? – Vlad Tsepelev Sep 10 '14 at 07:48
  • Increased quota will help, though there are limits to how much can be granted. Batch sizes of 1000 are too big. I'd recommend staying < 100, even lower if you're batching updates. – Steve Bazyl Sep 10 '14 at 18:55
  • As for 500s, there are lots of things that can cause transient errors. Our recommendation for both rate limit errors and 500s is the same - slow down and retry the request after some delay. – Steve Bazyl Sep 10 '14 at 18:59
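Steve's "slow down and retry" advice is the standard exponential-backoff pattern. A minimal sketch (the `execute` callable and the retryable status set are illustrative, not taken from the Drive docs):

```python
import random
import time

# Statuses worth retrying per the answer above: rate limits and transient 5xx.
RETRYABLE = {403, 500, 503}

def backoff_delay(attempt, cap=64.0):
    """Delay before retry `attempt` (0-based): 2^attempt seconds
    plus random jitter, capped at `cap` seconds."""
    return min(cap, 2 ** attempt + random.random())

def call_with_backoff(execute, max_attempts=5):
    """Run `execute`, retrying on retryable HTTP status codes.

    `execute` is any callable returning (status, body); substitute
    your actual Drive API permission-insert call.
    """
    for attempt in range(max_attempts):
        status, body = execute()
        if status not in RETRYABLE:
            return status, body
        time.sleep(backoff_delay(attempt))
    return status, body
```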