
I am having an issue with the ChatGPT API relating to prompt engineering.

I have a dataset consisting of individual product titles and product descriptions (an awful design, but I didn't have control over that part). I need to create aggregate titles for the individual titles.

I fine-tuned the Curie model on the data in a similar way to this:

Prompt:

60cm tall Oak Drop Leaf Table|80cm tall Oak Drop Leaf Table|100cm tall Oak Drop Leaf Table|60cm tall Drop Leaf Table Material: Oak with bird design

Completion:

Oak Drop Leaf Table

I fine-tuned it on about 100 human-written titles.
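
For reference, each training example ends up as one JSON object per line in a JSONL file with "prompt" and "completion" keys; below is a minimal sketch of how one line might be built (the exact preparation code is an assumption):

// Sketch only: write one fine-tuning example as a single JSONL line.
$example = array(
    'prompt'     => '60cm tall Oak Drop Leaf Table|80cm tall Oak Drop Leaf Table|'
                  . '100cm tall Oak Drop Leaf Table|'
                  . '60cm tall Drop Leaf Table Material: Oak with bird design',
    'completion' => 'Oak Drop Leaf Table',
);

// Append the example to the training file as one JSON line.
file_put_contents('training_data.jsonl', json_encode($example) . "\n", FILE_APPEND);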

I am currently using these settings:

$data = array(
    'model'             => $model,
    'prompt'            => $prompt,
    'temperature'       => 0.6,
    'max_tokens'        => 25,
    'top_p'             => 1,
    'frequency_penalty' => 0,
    'presence_penalty'  => 0.6,
);

I have varied these, but to no great effect.
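
For context, the settings above would be posted to the v1/completions endpoint roughly like this (a minimal PHP cURL sketch; $apiKey is a hypothetical variable and the actual sending code may differ):

// Sketch only: send the completion request and read back the generated title.
$ch = curl_init('https://api.openai.com/v1/completions');
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => array(
        'Content-Type: application/json',
        'Authorization: Bearer ' . $apiKey, // hypothetical API key variable
    ),
    CURLOPT_POSTFIELDS     => json_encode($data),
));
$response = json_decode(curl_exec($ch), true);
curl_close($ch);

// The generated title is in the first choice's text field.
echo $response['choices'][0]['text'];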

I am wondering where I am going wrong.

I am getting responses like:

60cm Oak Drop Leaf TableOak Drop Leaf TableOak Drop Leaf Table

Oak Drop Leaf Table|Oak Drop Leaf Table|Oak Drop Leaf Table

Oak Drop Leaf Table,Oak Drop Leaf Table,Oak Drop Leaf Table

  • Does this answer your question? [Customize (fine-tune) OpenAI model: How to make sure answers are from customized (fine-tuning) dataset?](https://stackoverflow.com/questions/74000154/customize-fine-tune-openai-model-how-to-make-sure-answers-are-from-customized) – Rok Benko Apr 25 '23 at 07:35
  • Thanks for the link; it is good to have more information. But I know the web playground CAN complete these prompts, yet with the same prompts the API cannot for some reason, even with fine-tuning. – user16861522 Apr 25 '23 at 08:37
