I am trying to run a gpt4all model through the Python gpt4all library and host it online. According to the documentation my setup is correct: I have specified the model path and model name, and I have downloaded the model to my machine.
My code:
from gpt4all import GPT4All
model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin", model_path="C:/Users/mhaba/AppData/Local/nomic.ai/GPT4All/", allow_download=False)
This is the error I keep getting:
PS C:\Users\mhaba\Downloads\deliver> & "C:/Program Files (x86)/Microsoft Visual Studio/Shared/Python39_64/python.exe" c:/Users/mhaba/Downloads/deliver/app/test.py
Traceback (most recent call last):
File "c:\Users\mhaba\Downloads\deliver\app\test.py", line 3, in <module>
model = GPT4All("orca-mini-3b.ggmlv3.q4_0",model_path="C:/Users/mhaba/AppData/Local/nomic.ai/GPT4All/", allow_download=False)
File "C:\Users\mhaba\AppData\Roaming\Python\Python39\site-packages\gpt4all\gpt4all.py", line 45, in __init__
self.model = GPT4All.get_model_from_name(model_name)
File "C:\Users\mhaba\AppData\Roaming\Python\Python39\site-packages\gpt4all\gpt4all.py", line 319, in get_model_from_name
raise ValueError(err_msg)
ValueError: No corresponding model for provided filename orca-mini-3b.ggmlv3.q4_0.bin.
If this is a custom model, make sure to specify a valid model_type.
I am not sure what is causing this error, since the file named in the message does exist at that path.
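As a sanity check, I verified that the file is actually on disk and spelled exactly as I pass it to the constructor. This is just a hypothetical helper I wrote for debugging (`find_model` is not part of the gpt4all API), using my own directory and filename:

```python
import os

def find_model(model_dir, model_file):
    """Return the joined path, whether that file exists,
    and any .bin files actually present in model_dir."""
    full_path = os.path.join(model_dir, model_file)
    bin_files = []
    if os.path.isdir(model_dir):
        bin_files = [f for f in os.listdir(model_dir) if f.endswith(".bin")]
    return full_path, os.path.isfile(full_path), bin_files

# Values from my setup; substitute your own.
path, exists, bin_files = find_model(
    "C:/Users/mhaba/AppData/Local/nomic.ai/GPT4All/",
    "orca-mini-3b.ggmlv3.q4_0.bin",
)
print("looking for:", path)
print("exists:", exists)
print(".bin files in directory:", bin_files)
```

On my machine this prints `exists: True` and lists the file, so the filename itself matches what is in the directory.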