
I'm encountering an error with my OpenAI API code and I'm not sure of the best way to resolve it. I'm using the "text-davinci-003" model to generate an AI response using the following code:

import openai  # legacy openai-python (pre-1.0) Completions API

completion = openai.Completion.create(
    engine="text-davinci-003",
    prompt='\n'.join([f"{m['role']}: {m['content']}" for m in message_history]),
    temperature=0.7,
    max_tokens=1024,
    n=1,
    stop=None,
    timeout=60,
)

I get the following error:

openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 4401 tokens (3377 in your prompt; 1024 for the completion). Please reduce your prompt; or completion length.

I'm not sure of the best way to resolve this issue. Can you give me some advice on what I should do to fix this error?
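One common approach (sketched here as an illustration, not taken from the original post) is to drop the oldest entries from `message_history` until the estimated prompt size plus `max_tokens` fits within the 4097-token context window. The `approx_tokens` helper below is hypothetical and uses a rough four-characters-per-token estimate; an exact count would require the model's tokenizer (e.g. `tiktoken`).

```python
MAX_CONTEXT = 4097      # model's maximum context length, per the error message
MAX_COMPLETION = 1024   # tokens reserved for the completion (max_tokens above)

def approx_tokens(text):
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def trim_history(message_history, budget=MAX_CONTEXT - MAX_COMPLETION):
    """Keep the most recent messages whose combined estimate fits the budget."""
    trimmed = []
    used = 0
    # Walk newest-to-oldest so the most recent context is kept.
    for m in reversed(message_history):
        cost = approx_tokens(f"{m['role']}: {m['content']}")
        if used + cost > budget:
            break
        trimmed.append(m)
        used += cost
    # Restore chronological order before building the prompt.
    return list(reversed(trimmed))
```

The trimmed list can then be joined into the prompt exactly as in the original code. Because the estimate is approximate, a safety margin (or an exact tokenizer count) is advisable before sending the request.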

Abhishek Rai
helloWord
Comments:

- Does this answer your question? [OpenAI GPT-3 API error: "This model's maximum context length is 4097 tokens"](https://stackoverflow.com/questions/75396481/openai-gpt-3-api-error-this-models-maximum-context-length-is-4097-tokens) – Rok Benko Mar 14 '23 at 08:08
- https://www.youtube.com/watch?v=_vetq4G0Gsc&ab_channel=ShwetaLodha – yishairasowsky Apr 26 '23 at 13:04

0 Answers