
I am not able to import LLaMATokenizer

Is there any solution for this problem?

I am using the code from this repo: https://github.com/zphang/transformers/tree/llama_push and I am trying to load the model and tokenizer with

import transformers

tokenizer = transformers.LLaMATokenizer.from_pretrained("./weights/tokenizer/")
model = transformers.LLaMAForCausalLM.from_pretrained("./weights/llama-7b/")

which results in the following error:

ImportError: cannot import name 'LLaMATokenizer' from 'transformers'

ScrapperMaster

3 Answers


To complement cronoik's answer (which is the correct answer):

If you are still having problems with from transformers import LlamaForCausalLM, LlamaTokenizer, try installing the package directly from GitHub:

pip install git+https://github.com/huggingface/transformers

Also, don't forget to update the tokenizer config file (tokenizer_config.json) so that the tokenizer_class field says LlamaTokenizer instead of LLaMATokenizer.

source: https://github.com/huggingface/transformers/issues/22222
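
For example, a minimal sketch that patches the config in place (assuming the converted weights put a tokenizer_config.json next to the tokenizer files; adjust the path to your setup):

import json

config_path = "./weights/tokenizer/tokenizer_config.json"

with open(config_path) as f:
    config = json.load(f)

# Older conversion scripts wrote the pre-rename class name here
if config.get("tokenizer_class") == "LLaMATokenizer":
    config["tokenizer_class"] = "LlamaTokenizer"
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)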


The second L and the MA are lowercase in the class names: LlamaTokenizer and LlamaForCausalLM.

from transformers import LlamaForCausalLM, LlamaTokenizer

model_id = "my_weights/"

tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id)

One quick way to figure out the right casing for the class names is to go to the commits and do a Ctrl+F in the browser: https://github.com/huggingface/transformers/compare/main...zphang:transformers:llama_push
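
Alternatively, here is a small sketch that asks the installed package itself for the closest-matching name (only the standard library and an installed transformers are assumed):

import difflib
import transformers

# Suggests correctly-cased names such as 'LlamaTokenizer'
# even if you only remember 'LLaMATokenizer'
print(difflib.get_close_matches("LLaMATokenizer", dir(transformers), n=3))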

alvas
cronoik
  • Interesting feature suggestion: why doesn't the import error message suggest fixes for objects that are spelled similarly (or the same) but with different casing? – alvas Apr 02 '23 at 00:27
  • https://stackoverflow.com/q/75909708/610569 =) – alvas Apr 02 '23 at 01:42
  • Thanks, I resolved the issue in my local environment. But when I try to pass the git command in a SageMaker training job to execute the specific PR that lets me use the weights, I get errors. Is there a different way to install packages in a SageMaker training job? – ScrapperMaster Apr 02 '23 at 22:55
  • @ScrapperMaster please create a separate question. – cronoik Apr 03 '23 at 06:05

Using the correct imports will make this error go away. Use the code below, and see the reference here: https://huggingface.co/docs/transformers/main/en/model_doc/llama

from transformers import LlamaForCausalLM, LlamaTokenizer

# Path to locally converted LLaMA weights
model_id = "/root/models/models_hf/7B/"
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id)

# Tokenize a prompt and generate up to 50 tokens
prompt = "I am looking for a good phone"
inputs = tokenizer(prompt, return_tensors="pt")
generate_ids = model.generate(inputs.input_ids, max_length=50)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]

## Output
## 'I am looking for a good phone that is not too expensive. I am not looking for a smart phone. I am looking for a phone that is easy to use and has a good camera. I am looking for a phone that is not too'
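
As an aside, on a machine with a GPU you would typically move the model and the inputs to the device first. A minimal sketch continuing from the snippet above (assuming PyTorch with CUDA available):

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
inputs = tokenizer(prompt, return_tensors="pt").to(device)
generate_ids = model.generate(inputs.input_ids, max_length=50)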
s510