
I am fine-tuning GPT-3.5. I have done a fairly solid training run (200 examples over 3 epochs). The model I am after basically receives a chunk of text together with a question that should be answered with true or false, and all of my training examples follow that schema. However, the fine-tuned model refuses to follow the examples: sporadically it gives a sensible answer, but most of the time it answers "false \n\n -------- \n\n\n", adding random words here and there.
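For reference, a minimal sketch of what one of my training examples looks like, assuming the standard chat fine-tuning JSONL format (the system prompt, passage text, and file name here are placeholders, not my exact data):

```python
import json

# Illustrative sketch of one training example in the chat fine-tuning JSONL
# format; the system prompt and passage text are placeholders.
example = {
    "messages": [
        {
            "role": "system",
            "content": "Answer the question about the passage with 'true' or 'false' only.",
        },
        {
            "role": "user",
            "content": "Passage: <chunk of text>\nQuestion: <true/false question>",
        },
        {"role": "assistant", "content": "true"},
    ]
}

# Each of the 200 examples is written as one JSON object per line.
with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")
```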

I have retrained the model a couple of times, changing the output format to {"result": boolean} or something similar, but the problem remains.
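The retrained variant's assistant target looked roughly like this (again just a sketch with placeholder prompts, not my exact data):

```python
import json

# Sketch of the alternative target format: the assistant message is a small
# JSON object instead of a bare "true"/"false" string.
structured_example = {
    "messages": [
        {
            "role": "system",
            "content": 'Respond only with JSON of the form {"result": true} or {"result": false}.',
        },
        {
            "role": "user",
            "content": "Passage: <chunk of text>\nQuestion: <true/false question>",
        },
        {"role": "assistant", "content": json.dumps({"result": True})},
    ]
}
```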

How could this be fixed?
