Questions tagged [gpt-3]

Use this tag with Generative Pre-trained Transformer 3 (GPT-3). Do not use with GPT-2 or the ad tagging library (GPT).

References:

GPT-3 (Wikipedia)

296 questions
34
votes
3 answers

OpenAI API: How do I count tokens before(!) I send an API request?

OpenAI's text models have a context length; e.g., Curie has a context length of 2049 tokens. They provide max_tokens and stop parameters to control the length of the generated sequence. Therefore the generation stops either when the stop token is…
meliksahturker
  • 922
  • 2
  • 11
  • 20
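
A minimal sketch for counting tokens locally before calling the API, assuming the tiktoken package is installed (pip install tiktoken); the model name and the 2049-token Curie limit are taken from the question:

```python
import tiktoken

def count_tokens(text: str, model: str = "text-davinci-003") -> int:
    """Count tokens locally, before any API request is sent."""
    try:
        enc = tiktoken.encoding_for_model(model)   # picks the encoding for a known model
    except KeyError:
        enc = tiktoken.get_encoding("cl100k_base") # fallback for unrecognized model names
    return len(enc.encode(text))

prompt = "Say this is a test"
n = count_tokens(prompt)
# Prompt tokens plus max_tokens must fit inside the model's context window.
max_tokens = 2049 - n  # e.g. Curie's 2049-token context length
print(n, max_tokens)
```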
23
votes
3 answers

OpenAI GPT-3 API error: "This model's maximum context length is 4097 tokens"

I am making a request to the completions endpoint. My prompt is 1360 tokens, as verified by the Playground and the Tokenizer. I won't show the prompt, as it's a little too long for this question. Here is my request to openai in Node.js using the…
Kane Hooper
  • 1,531
  • 1
  • 9
  • 21
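
The 4097-token limit counts the prompt and the completion together, so max_tokens has to be capped at whatever the prompt leaves over. The question uses Node.js; this is the same arithmetic as a hedged Python sketch (model name and limit taken from the error message):

```python
import tiktoken

CONTEXT_WINDOW = 4097  # limit quoted in the error for this model family

def safe_max_tokens(prompt: str, desired: int, model: str = "text-davinci-003") -> int:
    """Shrink max_tokens so prompt tokens + completion tokens fit the context window."""
    enc = tiktoken.encoding_for_model(model)
    prompt_tokens = len(enc.encode(prompt))
    return min(desired, CONTEXT_WINDOW - prompt_tokens)

# e.g. max_tokens = safe_max_tokens(my_prompt, desired=1000)
```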
15
votes
4 answers

ChatGPT Token Limit

I want ChatGPT to remember past conversations and have a consistent (stateful) conversation. I have seen several code examples of ChatGPT prompt engineering. There were two ways to design the prompt, shown below (pseudo code): Use a single input (Cheap) <-…
Joohyun Lee
  • 167
  • 1
  • 1
  • 4
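
A rough sketch of the usual "rolling window" workaround: keep your own message list and drop the oldest turns until the history fits a token budget. The budget value is hypothetical, and the token count here is approximate (message content only):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
TOKEN_BUDGET = 3000  # hypothetical budget, leaving headroom for the model's reply

def trim_history(messages: list[dict]) -> list[dict]:
    """Drop the oldest user/assistant turns, keeping the system message at index 0."""
    def total_tokens(msgs):
        return sum(len(enc.encode(m["content"])) for m in msgs)

    msgs = list(messages)
    while total_tokens(msgs) > TOKEN_BUDGET and len(msgs) > 1:
        msgs.pop(1)  # remove the oldest non-system message
    return msgs
```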
14
votes
4 answers

GPT-3 API invalid_request_error: you must provide a model parameter

I'm new to APIs and I'm trying to understand how to get a response from a prompt using OpenAI's GPT-3 API (using api.openai.com/v1/completions). I'm using Postman to do so. The documentation says that there is only one required parameter, which is…
David A.
  • 291
  • 1
  • 2
  • 7
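
This error usually means the model name never reached the API as part of a JSON body (in Postman this is typically a wrong Content-Type or a form-data body). A minimal sketch of the raw HTTP call with Python's requests library; the model and prompt are illustrative:

```python
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/completions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",  # must be a JSON body, not form-data
    },
    json={"model": "text-davinci-003", "prompt": "Say hello", "max_tokens": 5},
)
print(resp.json())
```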
11
votes
5 answers

OpenAI: Stream interrupted (client disconnected)

I'm trying OpenAI. I have prepared the training data, and used fine_tunes.create. Several minutes later, it showed Stream interrupted (client disconnected). $ openai api fine_tunes.create -t data_prepared.jsonl Upload progress:…
SoftTimur
  • 5,630
  • 38
  • 140
  • 292
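
"Stream interrupted (client disconnected)" only means the progress stream dropped; the fine-tune job itself keeps running server-side, so it can be re-checked instead of re-created. A hedged sketch with the legacy (pre-1.0) openai Python client; the CLI equivalent is openai api fine_tunes.follow -i <job_id>:

```python
import openai

openai.api_key = "sk-..."  # your API key

jobs = openai.FineTune.list()["data"]      # all fine-tune jobs on the account
job_id = jobs[-1]["id"]                    # assume the most recent one is the interrupted job
job = openai.FineTune.retrieve(id=job_id)  # re-fetch it; the job was not cancelled
print(job["status"])                       # e.g. "pending", "running", "succeeded"
for event in job["events"]:                # progress events collected so far
    print(event["message"])
```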
9
votes
1 answer

OpenAI GPT-3 API: Fine tune a fine tuned model?

The OpenAI documentation for the model attribute in the fine-tune API states a bit confusingly: model The name of the base model to fine-tune. You can select one of "ada", "babbage", "curie", "davinci", or a fine-tuned model created after…
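
What the docs' wording means in practice is that the model argument also accepts the name of an existing fine-tuned model, so a second fine-tune can continue from it. A sketch with the legacy fine-tunes API; the file ID and model name below are hypothetical:

```python
import openai

openai.api_key = "sk-..."

resp = openai.FineTune.create(
    training_file="file-abc123",                    # hypothetical uploaded JSONL file ID
    model="curie:ft-your-org-2023-01-01-00-00-00",  # name of the existing fine-tuned model
)
print(resp["id"], resp["status"])
```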
8
votes
2 answers

Llama_index unexpected keyword argument error on ChatGPT Model Python

I'm testing a couple of the widely published GPT models, just trying to get my feet wet, and I am running into an error that I cannot solve. I am running this code: from llama_index import SimpleDirectoryReader, GPTListIndex, GPTSimpleVectorIndex,…
t25
  • 167
  • 3
  • 14
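
The "unexpected keyword argument" error typically comes from following a tutorial written for an older llama_index release: from roughly 0.5 onward, arguments like llm_predictor go into a ServiceContext and the index is built with from_documents() rather than the bare constructor. A hedged sketch of the 0.5.x-era API (directory name and model choice are assumptions):

```python
from langchain.chat_models import ChatOpenAI
from llama_index import (
    GPTSimpleVectorIndex,
    LLMPredictor,
    ServiceContext,
    SimpleDirectoryReader,
)

documents = SimpleDirectoryReader("data").load_data()
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

# Keyword arguments go into the ServiceContext, not the index constructor.
index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)
print(index.query("What are these documents about?"))
```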
8
votes
2 answers

Fine Tuning an OpenAI GPT-3 model on a collection of documents

According to the documentation https://beta.openai.com/docs/guides/fine-tuning the training data to fine-tune an OpenAI GPT-3 model should be structured as follows: {"prompt": "", "completion": ""} {"prompt":…
David
  • 7,652
  • 21
  • 60
  • 98
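
A minimal sketch of producing that JSONL format from a document collection; the directory layout, separator token, and the example prompt/completion pair are illustrative assumptions, not part of the docs:

```python
import json
from pathlib import Path

records = []
for path in Path("docs").glob("*.txt"):
    text = path.read_text()
    records.append({
        # A prompt/completion pair derived from the document; content here is hypothetical.
        "prompt": f"Document:\n{text}\n\nQuestion: What is this document about?\n\n###\n\n",
        "completion": " A short answer written by you for this document.\n",
    })

with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")  # one JSON object per line, as the docs show
```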
8
votes
1 answer

How do I make sure answers are from a customized (fine-tuning) dataset?

I'm using customized text with 'Prompt' and 'Completion' to train a new model. Here's the tutorial I used to create a customized model from my data: beta.openai.com/docs/guides/fine-tuning/advanced-usage However, even after training the model and sending…
Moshe
  • 208
  • 4
  • 13
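
Fine-tuning alone does not reliably restrict answers to your data; the approach usually recommended is retrieval: embed the dataset, fetch the closest passage, and tell the model to answer only from that context. A hedged sketch with the legacy (pre-1.0) openai client; the passages, models, and prompt wording are assumptions:

```python
import numpy as np
import openai

openai.api_key = "sk-..."
passages = ["Our support line is open 9-17 CET.", "Refunds are accepted within 30 days."]

def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

passage_vecs = [embed(p) for p in passages]

def answer(question: str) -> str:
    q = embed(question)
    # ada-002 vectors are unit-length, so a dot product ranks like cosine similarity.
    best = max(range(len(passages)), key=lambda i: float(np.dot(q, passage_vecs[i])))
    prompt = (f"Answer using ONLY the context below. If the answer is not there, "
              f"say you don't know.\n\nContext: {passages[best]}\n\nQ: {question}\nA:")
    resp = openai.Completion.create(model="text-davinci-003", prompt=prompt, max_tokens=100)
    return resp["choices"][0]["text"].strip()
```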
8
votes
6 answers

How do I know how many tokens a GPT-3 request used?

I am building an app around GPT-3, and I would like to know how many tokens every request I make uses. Is this possible, and how?
fabrice
  • 97
  • 1
  • 2
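
A minimal sketch: the completion response already reports usage after the fact, so no extra counting is needed (legacy pre-1.0 openai client shown; model and prompt are illustrative):

```python
import openai

openai.api_key = "sk-..."
resp = openai.Completion.create(model="text-davinci-003", prompt="Say hello", max_tokens=5)

usage = resp["usage"]  # token accounting returned with every completion
print(usage["prompt_tokens"], usage["completion_tokens"], usage["total_tokens"])
```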
7
votes
1 answer

How to add 'message history' to llama-index based GPT-3 in Python

I am fairly new to using the llama-index library for training GPT-3, as well as to using ChatGPT through the standard API (both in Python). I have noticed that with the standard ChatGPT API I could simply use the code below to have ChatGPT get message…
Lawrd_Das
  • 71
  • 3
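
One common workaround, since a plain index.query() call is stateless: keep your own history list and fold the last few turns into each query. A hedged sketch against the 0.5.x-era llama_index API implied by the question; the turn limit and prompt format are assumptions:

```python
history: list[str] = []

def chat(index, user_message: str) -> str:
    """Query a llama_index index while carrying a short rolling conversation history."""
    context = "\n".join(history[-6:])  # last few turns only, to stay under the token limit
    response = index.query(
        f"Conversation so far:\n{context}\n\nUser: {user_message}\nAssistant:"
    )
    answer = str(response)
    history.append(f"User: {user_message}")
    history.append(f"Assistant: {answer}")
    return answer
```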
6
votes
2 answers

OpenAI GPT-3 API error: "Request timed out"

I keep getting the error below: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600) when I run the code below: def generate_gpt3_response(user_text, print_output=False): """ Query OpenAI…
opsv
  • 89
  • 1
  • 3
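
A hedged sketch of the usual mitigation: lower the read timeout from the 600-second default and retry with backoff instead of hanging. request_timeout and openai.error.Timeout belong to the legacy (pre-1.0) openai-python client; the model and max_tokens are assumptions:

```python
import time
import openai

openai.api_key = "sk-..."

def generate_gpt3_response(user_text: str, retries: int = 3) -> str:
    for attempt in range(retries):
        try:
            resp = openai.Completion.create(
                model="text-davinci-003",
                prompt=user_text,
                max_tokens=200,
                request_timeout=30,  # fail fast instead of waiting 600 s
            )
            return resp["choices"][0]["text"]
        except openai.error.Timeout:
            time.sleep(2 ** attempt)  # simple exponential backoff before retrying
    raise RuntimeError("OpenAI request kept timing out")
```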
6
votes
1 answer

OpenAI GPT-3 API: How to extend length of the TL;DR output?

I'd like to produce a 3-6 sentence summary of a 2-3 page article, using OpenAI's TL;DR. I've pasted the article text, but the output seems to stay between 1 and 2 sentences only.
psone
  • 95
  • 8
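
A minimal sketch: raise max_tokens and state the target length in the instruction instead of relying on a bare "Tl;dr" suffix (legacy client shown; model, temperature, and token budget are assumptions):

```python
import openai

openai.api_key = "sk-..."
article = "..."  # the 2-3 page article text

resp = openai.Completion.create(
    model="text-davinci-003",
    prompt=f"{article}\n\nSummarize the article above in 3 to 6 sentences:",
    max_tokens=300,   # leave enough room for several sentences
    temperature=0.3,
)
print(resp["choices"][0]["text"].strip())
```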
6
votes
1 answer

How can I use GPT-3 for my text classification?

I am wondering whether I can use OpenAI GPT-3 for transfer learning in a text classification problem. If so, how can I get started on it using TensorFlow and Keras?
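
GPT-3's weights are not available for Keras-style transfer learning; the usual route is few-shot prompting (or fine-tuning) through the API. A hedged sketch of few-shot classification; the labels and example reviews are hypothetical:

```python
import openai

openai.api_key = "sk-..."

FEW_SHOT = """Classify the sentiment of each review as Positive or Negative.

Review: The battery lasts for days.
Sentiment: Positive

Review: It broke after one week.
Sentiment: Negative
"""

def classify(review: str) -> str:
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"{FEW_SHOT}\nReview: {review}\nSentiment:",
        max_tokens=3,    # the label is only a token or two
        temperature=0,   # deterministic labels
    )
    return resp["choices"][0]["text"].strip()

print(classify("Absolutely love it"))
```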
5
votes
2 answers

Figuring out general specs for running LLM models

I have three questions: Given the count of LLM parameters in billions, how can you figure out how much GPU RAM you need to run the model? If you have enough CPU RAM (i.e. no GPU), can you run the model, even if it is slow? Can you run LLM models (like…
sten
  • 7,028
  • 9
  • 41
  • 63
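
A back-of-the-envelope sketch for the first question: the weights alone need roughly (parameter count x bytes per parameter), plus runtime overhead for activations and the KV cache, so the real requirement is somewhat higher than these numbers:

```python
def approx_weight_gb(params_billion: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GiB."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

for name, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"7B model, {name}: ~{approx_weight_gb(7, bytes_per_param):.1f} GB for weights")
# fp16 ~13.0 GB, int8 ~6.5 GB, int4 ~3.3 GB (before runtime overhead)
```

With enough CPU RAM the same arithmetic applies and the model can run CPU-only, just much more slowly.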