
Getting this error: AttributeError: 'GPT2Tokenizer' object has no attribute 'train_new_from_iterator'

This is very similar to the Hugging Face documentation example; I only changed the input, which shouldn't affect anything. It worked once, but when I came back to it two hours later it didn't, and nothing was changed. The documentation states that train_new_from_iterator only works with 'fast' tokenizers, and that AutoTokenizer is supposed to pick a 'fast' tokenizer by default. My best guess is that it is having some trouble with this. I also tried downgrading transformers and reinstalling, with no success. df is just one column of text.
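One quick way to confirm whether AutoTokenizer actually returned a fast tokenizer is to check its is_fast attribute (a minimal sketch; the checkpoint name is only an example):

from transformers import AutoTokenizer, PreTrainedTokenizerFast

tok = AutoTokenizer.from_pretrained('roberta-base', use_fast=True)
print(tok.is_fast)                               # True if a fast (Rust-backed) tokenizer was loaded
print(isinstance(tok, PreTrainedTokenizerFast))  # equivalent check

If is_fast is False, the error above is expected, since the slow GPT2Tokenizer does not implement train_new_from_iterator. The full code that produces the error is below: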

from transformers import AutoTokenizer
import pandas as pd
import tokenizers

def batch_iterator(batch_size=10, size=5000):
    for i in range(100): #2264
        query = f"select note_text from cmx_uat.note where id > {i * size} limit 50;"
        df = pd.read_sql(sql=query, con=cmx_uat)

        for x in range(0, size, batch_size):
            yield list(df['note_text'].loc[0:5000])[x:x + batch_size]

old_tokenizer = AutoTokenizer.from_pretrained('roberta')
training_corpus = batch_iterator()
new_tokenizer = old_tokenizer.train_new_from_iterator(training_corpus, 32000)
  • See also https://stackoverflow.com/questions/64669365/huggingface-bert-tokenizer-add-new-token/76198096#76198096 – alvas May 08 '23 at 06:49

1 Answer


There are two things to keep in mind:

First: train_new_from_iterator only works with fast tokenizers (you can read more here).

Second: the training corpus should be a generator of batches of texts, for instance a list of lists of texts if you have everything in memory (see the official documentation).
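For instance, if everything fits in memory, the corpus can simply be a nested list of strings (a tiny illustration with made-up texts):

training_corpus = [
    ["first note", "second note"],   # batch 1
    ["third note", "fourth note"],   # batch 2
]

A generator that yields such batches works just as well, as in the example below.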

import pandas as pd
from transformers import AutoTokenizer

def batch_iterator(batch_size=3, size=8):
    # Toy corpus: a single-column DataFrame, yielded as batches of texts.
    df = pd.DataFrame({"note_text": ['fghijk', 'wxyz']})
    for x in range(0, size, batch_size):
        yield df['note_text'].to_list()

old_tokenizer = AutoTokenizer.from_pretrained('roberta-base')
training_corpus = batch_iterator()
new_tokenizer = old_tokenizer.train_new_from_iterator(training_corpus, 32000)
print(old_tokenizer(['fghijk', 'wxyz']))
print(new_tokenizer(['fghijk', 'wxyz']))

output:

{'input_ids': [[0, 506, 4147, 18474, 2], [0, 605, 32027, 329, 2]], 'attention_mask': [[1, 1, 1, 1, 1], [1, 1, 1, 1, 1]]}
{'input_ids': [[0, 22, 2], [0, 21, 2]], 'attention_mask': [[1, 1, 1], [1, 1, 1]]}
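If the retrained tokenizer behaves as expected, it can be saved and reloaded like any other tokenizer (a sketch; the directory name is just an example):

new_tokenizer.save_pretrained("retrained-roberta-tokenizer")
reloaded = AutoTokenizer.from_pretrained("retrained-roberta-tokenizer")
print(reloaded.is_fast)  # should print True: train_new_from_iterator returns a fast tokenizer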