14

I'm new to APIs and I'm trying to understand how to get a response from a prompt using OpenAI's GPT-3 API (using api.openai.com/v1/completions). I'm using Postman to do so. The documentation says that there is only one required parameter, which is the "model." However, I get an error saying that "you must provide a model parameter," even though I already provided it.

What am I doing wrong?

[screenshot of the API error]

Rubén
David A.

4 Answers

16

You can get this to work the following way in Postman with the POST setting:

  1. Leave all items in the Params tab empty

  2. In the Authorization tab, choose Type "Bearer Token" and paste your OpenAI API key as the token (as you likely already did)

  3. In the Headers tab, add key "Content-Type" with value "application/json"

  4. In the Body tab, switch to Raw, and add e.g.

     {
         "model": "text-davinci-002",
         "prompt": "Albert Einstein was"
     }
    
  5. Hit Send. You'll get back the completions for your prompt.
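
The steps above can be sketched outside Postman with Python's standard library. This is a minimal sketch, not OpenAI's official client; `build_completion_request` is a hypothetical helper name, and the API key is a placeholder:

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(api_key, model, prompt):
    """Assemble the POST request that Postman sends in steps 1-4 above."""
    body = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    headers = {
        "Content-Type": "application/json",    # step 3
        "Authorization": f"Bearer {api_key}",  # step 2
    }
    return urllib.request.Request(API_URL, data=body, headers=headers, method="POST")

# Sending it (requires a real key and network access):
# req = build_completion_request("sk-...", "text-davinci-002", "Albert Einstein was")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```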

Note: alternatively, you can put the model into the POST URL, like https://api.openai.com/v1/engines/text-davinci-002/completions

While the above works, it might not be using the Postman UI to its full potential -- after all, we're raw-editing JSON instead of using nice key-value input boxes. If you find out how to do the latter, let us know.


Philipp Lenssen
4

What solved it for me was adding the Content-Type header: `Content-Type: application/json`
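
This header matters because the server parses the request body according to it: without `Content-Type: application/json`, the JSON body isn't recognized, so the API behaves as if no `model` was sent. As a sketch, the third-party `requests` library sets this header for you when you pass `json=` (the key below is a placeholder):

```python
import requests  # third-party: pip install requests

# json= makes requests serialize the body AND set the
# "Content-Type: application/json" header automatically.
req = requests.Request(
    "POST",
    "https://api.openai.com/v1/completions",
    headers={"Authorization": "Bearer sk-test"},  # placeholder key
    json={"model": "text-davinci-002", "prompt": "Albert Einstein was"},
).prepare()

print(req.headers["Content-Type"])  # prints "application/json"
```

To actually send it, pass the prepared request to `requests.Session().send(req)` with a real key.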


SwissCodeMen
Tom
2

You also need to pay attention to the HTTP method the endpoint expects. If you send a GET request to an endpoint that expects POST, this same error is reported.
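
As an illustration with the same standard-library approach as above: `urllib.request.Request` defaults to GET unless you pass a body or `method="POST"`, which is an easy way to hit this error by accident:

```python
import urllib.request

url = "https://api.openai.com/v1/completions"

# No data= and no method=: urllib defaults to GET, which this
# endpoint rejects with the "you must provide a model parameter" error.
wrong = urllib.request.Request(url)

# Explicitly POST (body and Content-Type omitted here for brevity).
right = urllib.request.Request(url, method="POST")

print(wrong.get_method(), right.get_method())  # prints "GET POST"
```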

Better
  • Thanks for the answer, yeah, this is the issue I got; the error message just confused me. It could simply be a message like `The GET method is not supported` – tim Jun 08 '23 at 01:46
0

from

    import openai  # pre-1.0 openai SDK

    response = openai.Completion.create(
        engine="text-davinci-003",
        prompt=prompt,
        max_tokens=100,
        n=1,
        stop=None,
        temperature=0.3,
        presence_penalty=2,
    )
    answer = response.choices[0].text.strip()

to

    import openai  # pre-1.0 openai SDK

    # The chat endpoint expects a list of role/content messages.
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        max_tokens=100,
        n=1,
        stop=None,
        temperature=0.3,
        presence_penalty=2,
    )
    answer = response["choices"][0]["message"]["content"]