
On occasion I'm getting a rate limit error without being over my rate limit. I'm using the text completions endpoint on the paid API, which has a rate limit of 3,000 requests per minute. I am making at most 3-4 requests per minute.

Sometimes I will get the following error from the API:

  • Status Code: 429 (Too Many Requests)
  • OpenAI error type: server_error
  • OpenAI error message: That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists.

The OpenAI documentation states that a 429 error indicates you have exceeded your rate limit, which I clearly have not: https://help.openai.com/en/articles/6891829-error-code-429-rate-limit-reached-for-requests

The weird thing is that the OpenAI error message does not say that. It is giving the response I usually get from a 503 error (service unavailable).
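For reference, the two cases can be told apart from the error body rather than the status code alone. This is only a sketch: the payload below is a hand-written stand-in shaped like the fields shown above, not a captured response.

```python
import json

# Hand-written stand-in for the 429 response body described above.
raw = json.dumps({
    "error": {
        "type": "server_error",
        "message": "That model is currently overloaded with other requests. "
                   "You can retry your request, or contact us through our help "
                   "center at help.openai.com if the error persists.",
    }
})

def classify_429(body: str) -> str:
    """Treat a 429 whose error type is 'server_error' as an overloaded-server
    response; anything else is assumed to be a genuine rate-limit rejection."""
    err = json.loads(body).get("error", {})
    return "overloaded" if err.get("type") == "server_error" else "rate_limited"

print(classify_429(raw))  # prints "overloaded"
```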

I'd love to hear some thoughts on this, any theories, or if anyone else has been experiencing this.

Jared
  • Possibly related: [OpenAI API giving error: 429 Too Many Requests](https://stackoverflow.com/questions/75041580/openai-api-giving-error-429-too-many-requests) – ggorlen Jun 13 '23 at 16:43

3 Answers


I have seen a few messages on the OpenAI community forum with similar reports. I suggest checking out the error code guide we have, which includes suggestions to mitigate these errors. In general, though, it's possible the model itself was down and this has nothing to do with your rate limit: https://platform.openai.com/docs/guides/error-codes
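Since the error message itself says "You can retry your request", a common mitigation is exponential backoff. A minimal sketch (not the official SDK; `RuntimeError` here is a stand-in for whatever exception your client raises on a 429):

```python
import random
import time

def retry_with_backoff(fn, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Call fn(), retrying on RuntimeError (a stand-in for the client's
    429/overloaded exception) with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay + random.uniform(0, base_delay))

# Demo: a fake endpoint that fails twice with a 429-style error, then succeeds.
calls = {"n": 0}

def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429: That model is currently overloaded")
    return "completion text"

print(retry_with_backoff(flaky_request, base_delay=0.01))  # prints "completion text"
```

The jitter spreads out retries so many clients hitting the same overloaded model don't all come back at the same instant.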

logankilpatrick

This error indicates the OpenAI servers have too many requests from all users and their servers have reached their capacity to service your request. It's pretty common at the moment.

Hopefully they will upgrade their servers soon. I'm not really sure why it is such a big problem, since they run on Azure and should be able to scale with ramped-up demand. Maybe they are just trying to minimise costs.

Kane Hooper
  • "Not really sure why it is a big problem since they run on Azure and should be able to scale based on ramped up demand." - the problem is GPUs. – opyate Jun 13 '23 at 08:59

The free trial allows you only 3 requests per minute (3 RPM).

from here


Yilmaz