
I am trying to merge my fine-tuned adapters into the base model with this:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Free the GPU memory held by the training run
del model
torch.cuda.empty_cache()

pre_trained_model_checkpoint = "databricks/dolly-v2-3b"
trained_model_checkpoint_output_folder = "/content/gdrive/MyDrive/AI/Adapters/myAdapter-dolly-v2-3b/"

# Reload the base model and attach the trained adapters
base_model = AutoModelForCausalLM.from_pretrained(pre_trained_model_checkpoint,
                                  trust_remote_code=True,
                                  device_map="auto"
                                  )
model_to_merge = PeftModel.from_pretrained(base_model, trained_model_checkpoint_output_folder)
del base_model
torch.cuda.empty_cache()

# Fold the adapter weights into the base weights
merged_model = model_to_merge.merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained(trained_model_checkpoint_output_folder)
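(For reference, AutoModelForCausalLM.from_pretrained loads weights in float32 by default unless a torch_dtype is passed, so the load step above may be upcasting a float16 checkpoint. A minimal variant of that load, assuming the checkpoint is stored in half precision, would be:)

base_model = AutoModelForCausalLM.from_pretrained(
    pre_trained_model_checkpoint,
    trust_remote_code=True,
    device_map="auto",
    torch_dtype=torch.float16,  # assumption: checkpoint is fp16; keeps the merge and save in half precision
)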

Then I save the merged model:

merged_model.save_pretrained('path')

The generated model is approximately double the size (5.6 GB to 11 GB). My fine-tuning basically adds information from a dataset of about 200 examples in Alpaca format.
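(For scale, assuming dolly-v2-3b's roughly 2.8B parameters, the two sizes line up with float16 vs. float32 storage:)

params = 2.8e9           # approximate parameter count of dolly-v2-3b
print(params * 2 / 1e9)  # ~5.6 GB if weights are stored in float16 (2 bytes each)
print(params * 4 / 1e9)  # ~11.2 GB if weights are stored in float32 (4 bytes each)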

What am I doing wrong?

