
I fine-tuned a pretrained BERT model in PyTorch using the Hugging Face Transformers library. All the training/validation was done on a GPU in the cloud.

At the end of the training, I save the model and tokenizer like below:

best_model.save_pretrained('./saved_model/')
tokenizer.save_pretrained('./saved_model/')

This creates the following files in the saved_model directory:

config.json
added_token.json
special_tokens_map.json
tokenizer_config.json
vocab.txt
pytorch_model.bin

Now, I download the saved_model directory to my computer and want to load the model and tokenizer. I can load the model like this:

model = torch.load('./saved_model/pytorch_model.bin',map_location=torch.device('cpu'))

But how do I load the tokenizer? I am new to PyTorch and not sure, because there are multiple files. Or perhaps I am not saving the model the right way?

nad

1 Answer


If you look at the signature of from_pretrained, it is the directory of the pretrained model/tokenizer that you are supposed to pass. Hence, the correct way to load the tokenizer is:

tokenizer = BertTokenizer.from_pretrained(<Path to the directory containing pretrained model/tokenizer>)

In your case:

tokenizer = BertTokenizer.from_pretrained('./saved_model/')

./saved_model here is the directory where you saved your pretrained model and tokenizer.
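For completeness, here is a minimal offline round-trip sketch of the save/load cycle (assuming only the transformers package is installed; the tiny BertConfig and toy vocab are stand-ins for illustration, not your fine-tuned model):

```python
import os
import tempfile
from transformers import BertConfig, BertModel, BertTokenizer

# Stand-in for './saved_model/' so the sketch is self-contained.
save_dir = tempfile.mkdtemp()

# Toy vocab so BertTokenizer can be constructed without downloading anything.
vocab_path = os.path.join(save_dir, 'vocab.txt')
with open(vocab_path, 'w') as f:
    f.write('\n'.join(['[PAD]', '[UNK]', '[CLS]', '[SEP]', '[MASK]', 'hello', 'world']))

tokenizer = BertTokenizer(vocab_path)
model = BertModel(BertConfig(vocab_size=7, hidden_size=8, num_hidden_layers=1,
                             num_attention_heads=2, intermediate_size=16))

model.save_pretrained(save_dir)      # writes config.json + the weights file
tokenizer.save_pretrained(save_dir)  # writes vocab.txt + tokenizer configs

# Reload both from the same directory, as described above.
tokenizer2 = BertTokenizer.from_pretrained(save_dir)
model2 = BertModel.from_pretrained(save_dir)
```

Note that BertModel.from_pretrained(save_dir) also reconstructs the model for you; that is usually more convenient than calling torch.load on pytorch_model.bin directly, which returns only a state dict of tensors, not a model object.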

Ashwin Geet D'Sa
  • How to make this reference to local model work in docker? I'm putting my model and tokenizer in a folder called "./saved" and I get the following error. Looks like Docker is still looking for the config, model, tokenizer files from hugging face. – neelmeg Jul 05 '21 at 17:51
  • 404 Client Error: Not Found for url: https://huggingface.co/saved/resolve/main/config.json Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/transformers/configuration_utils.py", line 505, in get_config_dict user_agent=user_agent, File "/usr/local/lib/python3.7/site-packages/transformers/file_utils.py", line 1337, – neelmeg Jul 05 '21 at 17:51
  • Any idea how to do the same thing in the Scala Spark NLP implementation? I am trying to load my own tokenizer into the pipeline but keep running into compatibility issues. – teksan Apr 05 '23 at 11:26
  • Have never used Scala or Spark. Sorry about that :( – Ashwin Geet D'Sa Apr 05 '23 at 12:25