
I am currently trying to use OpenAI's most recent model: gpt-3.5-turbo. I am following a very basic tutorial.

I am working from a Google Colab notebook. I have to make a request for each prompt in a list of prompts, which for the sake of simplicity looks like this:

prompts = ['What are your functionalities?', 'what is the best name for an ice-cream shop?', 'who won the premier league last year?']

I defined a function to do so:

import openai

# Load your API key from an environment variable or secret management service
openai.api_key = 'my_API'

def get_response(prompts: list, model="gpt-3.5-turbo"):
    responses = []
    restart_sequence = "\n"

    for item in prompts:
        response = openai.Completion.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
            max_tokens=20,
            top_p=1,
            frequency_penalty=0,
            presence_penalty=0
        )
        responses.append(response['choices'][0]['message']['content'])

    return responses

However, when I call `responses = get_response(prompts=prompts[0:3])` I get the following error:

InvalidRequestError: Unrecognized request argument supplied: messages

Any suggestions?

Replacing the `messages` argument with `prompt` leads to the following error:

InvalidRequestError: [{'role': 'user', 'content': 'What are your functionalities?'}] is valid under each of {'type': 'array', 'minItems': 1, 'items': {'oneOf': [{'type': 'integer'}, {'type': 'object', 'properties': {'buffer': {'type': 'string', 'description': 'A serialized numpy buffer'}, 'shape': {'type': 'array', 'items': {'type': 'integer'}, 'description': 'Array shape'}, 'dtype': {'type': 'string', 'description': 'Stringified dtype'}, 'token': {'type': 'string'}}}]}, 'example': '[1, 1313, 451, {"buffer": "abcdefgh", "shape": [1024], "dtype": "float16"}]'}, {'type': 'array', 'minItems': 1, 'maxItems': 2048, 'items': {'oneOf': [{'type': 'string'}, {'type': 'object', 'properties': {'buffer': {'type': 'string', 'description': 'A serialized numpy buffer'}, 'shape': {'type': 'array', 'items': {'type': 'integer'}, 'description': 'Array shape'}, 'dtype': {'type': 'string', 'description': 'Stringified dtype'}, 'token': {'type': 'string'}}}], 'default': '', 'example': 'This is a test.', 'nullable': False}} - 'prompt'
corvusMidnight
  • `messages` isn't the correct argument. Guess you need `prompt: []` – 0stone0 Mar 02 '23 at 15:58
  • @0stone0 the messages argument is the one provided in the documentation. However, implementing your solution leads to another error message (check the most recent **edit**) – corvusMidnight Mar 02 '23 at 16:08
  • But the prompt just need to be your question: `prompt: item` – 0stone0 Mar 02 '23 at 16:13
  • @0stone0 This leads to a different error which I believe has to do with the model (your solution would work, e.g., with a ***davinci*** model. ***InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?*** – corvusMidnight Mar 02 '23 at 16:16
  • OK, I made some code myself and can't reproduce your problem. Works fine over here. – 0stone0 Mar 02 '23 at 16:29
  • Are you sure you are using the latest version of the `openai` package? – 0stone0 Mar 02 '23 at 16:31

3 Answers


Problem

You used the wrong function to get a completion: `openai.Completion.create` targets the legacy `/v1/completions` endpoint, which does not accept a `messages` argument. When using the OpenAI SDK (Python or NodeJS), you need to call the function that matches the endpoint your model is served from. Which is the right one? It depends on the model you want to use.

Solution

The tables below will help you figure out which function is the right one for a given OpenAI model.

First, find in the table below which API endpoint is compatible with the model you want to use.

| API endpoint | Model group | Model name |
|---|---|---|
| /v1/chat/completions | GPT-4 | gpt-4, gpt-4-0613, gpt-4-32k, gpt-4-32k-0613 |
| /v1/chat/completions | GPT-3.5 | gpt-3.5-turbo, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k, gpt-3.5-turbo-16k-0613 |
| /v1/completions (Legacy) | GPT base | davinci-002, babbage-002 |
| /v1/completions (Legacy) | GPT-3 | text-davinci-003, text-davinci-002, text-davinci-001, text-curie-001, text-babbage-001, text-ada-001, davinci, curie, babbage, ada |
| /v1/audio/transcriptions | Whisper | whisper-1 |
| /v1/audio/translations | Whisper | whisper-1 |
| /v1/fine-tunes | GPT-3.5 | gpt-3.5-turbo-0613 (recommended) |
| /v1/fine-tunes | GPT base | davinci-002, babbage-002 |
| /v1/fine-tunes | GPT-3 | davinci, curie, babbage, ada |
| /v1/embeddings | Embeddings | text-embedding-ada-002, text-similarity-*-001, text-search-*-*-001, code-search-*-*-001 |
| /v1/moderations | Moderations | text-moderation-stable, text-moderation-latest |

Second, find in the table below which function you need to use.

Note: OpenAI NodeJS SDK v4 was released on August 16, 2023, and is a complete rewrite of the SDK. Among other things, there are changes in method names. See the v3 to v4 migration guide.

| API endpoint | Python function | NodeJS function (SDK v3) | NodeJS function (SDK v4) |
|---|---|---|---|
| /v1/chat/completions | openai.ChatCompletion.create | openai.createChatCompletion | openai.chat.completions.create |
| /v1/completions | openai.Completion.create | openai.createCompletion | openai.completions.create |
| /v1/audio/transcriptions | openai.Audio.transcribe | openai.createTranscription | openai.audio.transcriptions.create |
| /v1/audio/translations | openai.Audio.translate | openai.createTranslation | openai.audio.translations.create |
| /v1/fine-tunes | openai.FineTune.create | openai.createFineTune | openai.fineTunes.create |
| /v1/embeddings | openai.Embedding.create | openai.createEmbedding | openai.embeddings.create |
| /v1/moderations | openai.Moderation.create | openai.createModeration | openai.moderations.create |
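If you want to look up the mapping programmatically, the Python column of the table above can be captured as a plain dict (`PYTHON_FUNCTION_BY_ENDPOINT` is an illustrative name, not part of the SDK):

```python
# Python SDK function (as a dotted name) for each OpenAI API endpoint,
# transcribed from the table above.
PYTHON_FUNCTION_BY_ENDPOINT = {
    "/v1/chat/completions": "openai.ChatCompletion.create",
    "/v1/completions": "openai.Completion.create",
    "/v1/audio/transcriptions": "openai.Audio.transcribe",
    "/v1/audio/translations": "openai.Audio.translate",
    "/v1/fine-tunes": "openai.FineTune.create",
    "/v1/embeddings": "openai.Embedding.create",
    "/v1/moderations": "openai.Moderation.create",
}

# gpt-3.5-turbo is served from /v1/chat/completions, so:
print(PYTHON_FUNCTION_BY_ENDPOINT["/v1/chat/completions"])
```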

Python working example for gpt-3.5-turbo (i.e., the Chat Completions API)

If you run test.py, the OpenAI API will return the following completion:

Hello there! How can I assist you today?

test.py

import openai
import os

openai.api_key = os.getenv('OPENAI_API_KEY')

completion = openai.ChatCompletion.create(
  model = 'gpt-3.5-turbo',
  messages = [
    {'role': 'user', 'content': 'Hello!'}
  ],
  temperature = 0  
)

print(completion['choices'][0]['message']['content'])
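Applied to the question's list of prompts, the same call can be wrapped in a loop. A minimal sketch, assuming the 0.x Python SDK as above (`get_responses` and `build_messages` are illustrative names, not part of the SDK):

```python
import os

def build_messages(prompt):
    # Chat Completions expects a list of {"role": ..., "content": ...} dicts,
    # not a bare string like the legacy `prompt` parameter.
    return [{"role": "user", "content": prompt}]

def get_responses(prompts, model="gpt-3.5-turbo"):
    import openai  # imported here so build_messages stays usable without the SDK
    openai.api_key = os.getenv("OPENAI_API_KEY")
    responses = []
    for prompt in prompts:  # loop variable matches the name used in the payload
        completion = openai.ChatCompletion.create(
            model=model,
            messages=build_messages(prompt),
            temperature=0,
            max_tokens=20,
        )
        responses.append(completion["choices"][0]["message"]["content"])
    return responses
```

Note that the loop variable is the same name used inside the payload; in the question's code the loop iterated over `item` while the payload referenced `prompt`.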

NodeJS working example for gpt-3.5-turbo (i.e., the Chat Completions API)

If you run test.js, the OpenAI API will return the following completion:

Hello there! How can I assist you today?

• If you have the OpenAI NodeJS SDK v3:

test.js

// Old (i.e., OpenAI NodeJS SDK v3)
const { Configuration, OpenAIApi } = require('openai');

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});

const openai = new OpenAIApi(configuration);

async function getChatCompletionFromOpenAI() {
  const chatCompletion = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'user', content: 'Hello!' }
    ],
    temperature: 0,
  });

  console.log(chatCompletion.data.choices[0].message.content);
}

getChatCompletionFromOpenAI();

• If you have the OpenAI NodeJS SDK v4:

test.js

// New (i.e., OpenAI NodeJS SDK v4)
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

async function getChatCompletionFromOpenAI() {
  const chatCompletion = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'user', content: 'Hello!' }
    ],
    temperature: 0,
  });

  console.log(chatCompletion.choices[0].message.content);
}

getChatCompletionFromOpenAI();
Rok Benko
  • This: I think the naming practices at OpenAI made it a bit confusing, Why would you have this in the introduction example: ```import openai openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=[ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Who won the world series in 2020?"}, {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."}, {"role": "user", "content": "Where was it played?"} ] )``` – corvusMidnight Mar 03 '23 at 07:19
  • I agree, it's a bit confusing. I think they should copy-paste the example from the documentation. – Rok Benko Mar 03 '23 at 08:30
  • This is Python, right? What's the nodejs equivalent? I had `const completion = await openai.createCompletion({....})` but I get the error "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?" – matteo Mar 05 '23 at 21:23
For chat models like gpt-3.5-turbo, call `openai.ChatCompletion.create` and pass the prompt via the `messages` parameter; the reply text is at `choices[0]["message"]["content"]`:

response = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[
        {"role": "user", "content": "What is openAI?"}
    ],
    max_tokens=193,
    temperature=0,
)

print(response)
print(response["choices"][0]["message"]["content"])
  • This answer could benefit from literally any explanation at all. Code-only answers are rarely useful, especially as time goes on. – TylerH Mar 24 '23 at 14:26
You should define `messages = [{"role": "user", "content": prompt}]` outside of the API call and pass that variable instead:

messages = [{"role": "user", "content": prompt}]

for item in prompts:
    response = openai.Completion.create(
        model=model,
        messages=messages,
        temperature=0,
        max_tokens=20,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )
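Note, though, that with gpt-3.5-turbo this will still fail, since `openai.Completion.create` targets `/v1/completions` and that endpoint rejects `messages`. A sketch of the same per-prompt loop against the chat function instead, per the accepted answer (`chat_payload` and `run` are illustrative names, not part of the SDK):

```python
import os

def chat_payload(text):
    # Wraps one user prompt in the keyword arguments the chat function expects.
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": text}],
        "temperature": 0,
        "max_tokens": 20,
    }

def run(prompts):
    import openai  # requires the openai package (0.x SDK) and an API key
    openai.api_key = os.getenv("OPENAI_API_KEY")
    return [
        openai.ChatCompletion.create(**chat_payload(p))["choices"][0]["message"]["content"]
        for p in prompts
    ]
```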