
I am currently training a model and have saved checkpoints for the LoRA adapters, so I now have the adapter .bin and config files. How do I reload everything for inference without pushing to Hugging Face? Most of the documentation covers pushing to the Hub; I could not find anything about working with local files.

I tried:

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig.from_pretrained('/path/to/adapter')
model = get_peft_model(model, lora_config)
```

but this did not work.

Comment: Take a look at https://stackoverflow.com/questions/76459034/how-to-load-a-fine-tuned-peft-lora-model-based-on-llama-with-huggingface-transfo/76469875#76469875 – alvas Aug 12 '23 at 03:47
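For future readers: `get_peft_model` attaches a fresh, randomly initialized adapter to the base model rather than loading saved weights. Reloading a local checkpoint goes through `PeftModel.from_pretrained` instead, which reads the adapter files straight from a local directory. A minimal sketch, assuming the adapter directory contains `adapter_config.json` plus the adapter weights; the base model name below is a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the same base model the adapter was trained on ("base-model-name" is a placeholder).
base = AutoModelForCausalLM.from_pretrained("base-model-name")
tokenizer = AutoTokenizer.from_pretrained("base-model-name")

# PeftModel.from_pretrained reads adapter_config.json and the adapter
# weights directly from the local directory; no Hub push is needed.
model = PeftModel.from_pretrained(base, "/path/to/adapter")
model.eval()
```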
