import discord
import openai
import os


openai.api_key = os.environ.get("OPENAI_API_KEY")

# Specify the intents
intents = discord.Intents.default()
intents.members = True

# Create the client
client = discord.Client(intents=intents)

async def generate_response(message):
    prompt = f"{message.author.name}: {message.content}\nAI:"
    response = openai.Completion.create(
        engine="gpt-3.5-turbo",
        prompt=prompt,
        max_tokens=1024,
        n=1,
        stop=None,
        temperature=0.5,
    )
    return response.choices[0].text.strip()

@client.event
async def on_ready():
    print(f"We have logged in as {client.user}")
    
@client.event
async def on_message(message):
    if message.author == client.user:
        return

    response = await generate_response(message)
    await message.channel.send(response)

discord_token = 'DiscordToken'


client.run(discord_token)

I've tried different ways to access the API key, including adding it to my environment variables.

What else can I try, or where am I going wrong? I'm pretty new to programming. Error message:

openai.error.AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. See https://onboard.openai.com for details, or email support@openai.com if you have any questions.


EDIT

I solved the "No API key provided" error. Now I get the following error message:

openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?

    It seems like environment variable `OPENAI_API_KEY` is not properly set. Could you try to `print(os.environ.get("OPENAI_API_KEY"))` and see if an API key appears? – DWe1 Mar 18 '23 at 09:15
  • You probably want to use [`python-dotenv`](https://pypi.org/project/python-dotenv/) to populate your dictionary – roganjosh Mar 18 '23 at 09:17
  • 1
    Thank you, using dotenv worked. Now I'm getting the next error message: "openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?" I'm using gpt-3.5-turbo – RAFA 04128 Mar 18 '23 at 10:06
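As DWe1's comment suggests, a quick first check is whether the variable is actually visible to the Python process at all. A minimal, self-contained sketch (the placeholder key below is illustrative only; in a real shell you would export the real key before starting the bot):

```python
import os

# Simulate an exported variable for illustration only; in practice run
#   export OPENAI_API_KEY="sk-..."
# in the same terminal session before launching the bot.
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")

key = os.getenv("OPENAI_API_KEY")
print(key is not None)  # → True; None here means the variable never reached the process
```

If this prints `False`/`None` in your real setup, the bot was started from a shell where the variable was never exported, which is exactly what `python-dotenv` works around.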

5 Answers


Regarding openai.error.AuthenticationError: No API key provided

Change this...

openai.api_key = os.environ.get('OPENAI_API_KEY')

...to this.

openai.api_key = os.getenv('OPENAI_API_KEY')

Regarding openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint

The code you posted above would work immediately if you change just one thing: gpt-3.5-turbo to text-davinci-003. This gives you an answer as to why you're getting this error. It's because you used the code that works with the GPT-3 API endpoint, but wanted to use the GPT-3.5 model (i.e., gpt-3.5-turbo). See model endpoint compatibility.

API endpoint → Model group: Model name

/v1/chat/completions
  • GPT-4: gpt-4, gpt-4-0613, gpt-4-32k, gpt-4-32k-0613
  • GPT-3.5: gpt-3.5-turbo, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k, gpt-3.5-turbo-16k-0613
/v1/completions (Legacy)
  • GPT base: davinci-002, babbage-002
  • GPT-3: text-davinci-003, text-davinci-002, text-davinci-001, text-curie-001, text-babbage-001, text-ada-001, davinci, curie, babbage, ada
/v1/audio/transcriptions
  • Whisper: whisper-1
/v1/audio/translations
  • Whisper: whisper-1
/v1/fine-tunes
  • GPT-3.5: gpt-3.5-turbo-0613 (recommended)
  • GPT base: davinci-002, babbage-002
  • GPT-3: davinci, curie, babbage, ada
/v1/embeddings
  • Embeddings: text-embedding-ada-002, text-similarity-*-001, text-search-*-*-001, code-search-*-*-001
/v1/moderations
  • Moderations: text-moderation-stable, text-moderation-latest
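The compatibility table above can be sketched as a small lookup (a subset only; the dict name is my own) to make the mismatch in the question concrete:

```python
# Illustrative subset of the model/endpoint compatibility table above.
ENDPOINT_FOR_MODEL = {
    "gpt-4": "/v1/chat/completions",
    "gpt-3.5-turbo": "/v1/chat/completions",
    "text-davinci-003": "/v1/completions",
    "text-embedding-ada-002": "/v1/embeddings",
}

# The question's code sent gpt-3.5-turbo to /v1/completions, hence the error.
print(ENDPOINT_FOR_MODEL["gpt-3.5-turbo"])  # → /v1/chat/completions
```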

If you want to use the gpt-3.5-turbo model, then you need to write the code that works with the GPT-3.5 API endpoint (i.e., the ChatGPT API endpoint).

As you can see in the table above, there are API endpoints listed. If you're using the OpenAI package (like you are), then you need to use the appropriate function that will send your API request to the API endpoint that is compatible with your chosen OpenAI model. See the table below.

Note: OpenAI NodeJS SDK v4 was released on August 16, 2023, and is a complete rewrite of the SDK. Among other things, there are changes in method names. See the v3 to v4 migration guide.

API endpoint              Python function                NodeJS function (SDK v3)      NodeJS function (SDK v4)
/v1/chat/completions      openai.ChatCompletion.create   openai.createChatCompletion   openai.chat.completions.create
/v1/completions           openai.Completion.create       openai.createCompletion       openai.completions.create
/v1/audio/transcriptions  openai.Audio.transcribe        openai.createTranscription    openai.audio.transcriptions.create
/v1/audio/translations    openai.Audio.translate         openai.createTranslation      openai.audio.translations.create
/v1/fine-tunes            openai.FineTune.create         openai.createFineTune         openai.fineTunes.create
/v1/embeddings            openai.Embedding.create        openai.createEmbedding        openai.embeddings.create
/v1/moderations           openai.Moderation.create       openai.createModeration       openai.moderations.create

You need to adjust the whole code. See comments in the working example below.

Working example

If you run test.py, the OpenAI API will return the following completion:

Hello there! How can I assist you today?

test.py

import openai
import os

openai.api_key = os.getenv('OPENAI_API_KEY')

completion = openai.ChatCompletion.create(  # Change the function Completion to ChatCompletion
    model='gpt-3.5-turbo',
    messages=[  # Change the prompt parameter to the messages parameter
        {'role': 'user', 'content': 'Hello!'}
    ],
    temperature=0
)

print(completion['choices'][0]['message']['content'])  # Change how you access the message content
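Carrying the same change back into the question's Discord bot, the request body changes shape as shown below. This is an illustrative sketch of the payload only (the `chat_payload` helper is my own naming, not part of the OpenAI package); passing it to `openai.ChatCompletion.create(**payload)` still requires a valid API key:

```python
# Hypothetical helper showing how the bot's old "Name: text" prompt string
# maps onto the chat-format request for gpt-3.5-turbo. Building the dict is
# pure logic; no network call is made here.
def chat_payload(author_name, message_content):
    return {
        "model": "gpt-3.5-turbo",        # model=, not engine=
        "messages": [                    # messages=, not prompt=
            {"role": "user", "content": f"{author_name}: {message_content}"}
        ],
        "max_tokens": 1024,
        "temperature": 0.5,
    }

payload = chat_payload("RAFA", "Hello bot!")
print(payload["messages"][0]["content"])  # → RAFA: Hello bot!
```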

These are the model endpoints for the different tasks that OpenAI currently supports.

You used engine="gpt-3.5-turbo" with openai.Completion.create. Instead, use openai.ChatCompletion.create, or switch to one of the completion models (e.g., text-davinci-003).

You can find more here: model-endpoint-compatibility



The model gpt-3.5-turbo isn't supported by the /v1/completions endpoint; it needs the /v1/chat/completions endpoint. Change your code accordingly and it works. Let us know if you still have any issues. You can refer to the official documentation for all the various endpoints.


I wasn't writing a Discord bot, but a console terminal application. The key difference between the GPT-3 and gpt-3.5-turbo code is the role assignments.

You can make the AI respond neutrally and precisely, but you can also create a role-play scenario that fits your setting.

The example is elaborate, but it should provide plenty of material for people hitting the same problems when switching from the old Davinci-style models to the new system, which requires new syntax to get the code running.

My working cyberpunk-themed example looks something like this:

import os
import openai

# Authenticate with OpenAI
# Remember to export OPENAI_API_KEY="your API key here" in the terminal first.
openai.api_key = os.getenv("OPENAI_API_KEY")

# Define a function to prompt the user for input and generate a response
def generate_response(prompt):
    # Call the OpenAI API to generate a response
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": "This is the year 2099. I am a cyberpunk AI. Ask me anything."}, {"role": "user", "content": prompt}],
        max_tokens=1024,
        n=1,
        temperature=0.5,
        top_p=1,
        frequency_penalty=0.0,
        presence_penalty=0.6,
    )
    # Get the response text from the API response
    response_text = response['choices'][0]['message']['content']

    return response_text

# Start the conversation with the user
print("Welcome to a conversation with a cyberpunk AI in the year 2099!")

# Loop to continue the conversation until the user exits
while True:
    # Prompt the user for input
    prompt = input("You: ")

    # Generate a response to the user input
    response = generate_response(prompt)

    # Print the response
    print("Cyberpunk AI:", response)
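One limitation of the loop above is that each API call sends only the latest prompt, so the model forgets earlier turns. A sketch of keeping a running history you could pass as `messages` instead (the `add_turn` helper is my own naming, not part of the OpenAI package):

```python
# Illustrative history-keeping: start from the same system message and append
# each user/assistant turn, then send the whole list as `messages` next time.
history = [{"role": "system",
            "content": "This is the year 2099. I am a cyberpunk AI. Ask me anything."}]

def add_turn(history, user_text, assistant_text):
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history

add_turn(history, "Who are you?", "A cyberpunk AI, of course.")
print(len(history))  # → 3
```

Note that the history grows with every turn and counts against the model's context window, so long sessions eventually need truncation or summarization.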

If you're hitting this error through LangChain, change this:

from langchain.llms import OpenAI
llm = OpenAI(temperature=0, max_tokens=1000)

To this:

from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613", max_tokens=1000)