
This is my first NLP task, and I would like to use a BART model and tokenizer from Hugging Face to pre-train and fine-tune. The code is shown below.

from transformers import BertTokenizer, BertForMaskedLM

# Pre-training: BERT tokenizer and masked-LM head
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
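
For reference, a masked-LM pre-training step like this is usually wired up with Trainer and DataCollatorForLanguageModeling. The sketch below is only illustrative, not my exact script; the corpus path ./corpus.txt and the training arguments are placeholders:

from transformers import (
    BertTokenizer,
    BertForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')

# Load and tokenize a plain-text corpus (placeholder path)
dataset = load_dataset('text', data_files={'train': './corpus.txt'})

def tokenize(batch):
    return tokenizer(batch['text'], truncation=True, max_length=128)

tokenized = dataset['train'].map(tokenize, batched=True, remove_columns=['text'])

# The collator randomly masks 15% of tokens for the MLM objective
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir='./checkpoints_bert', num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()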

As you can see, for pre-training I used BertTokenizer and the BertForMaskedLM model, but for fine-tuning I loaded the checkpoint with BartForConditionalGeneration and BartTokenizer; as a result, the model performs poorly.

from transformers import BartForConditionalGeneration, BartTokenizer

# Fine-tuning: BART model loaded from the BERT pre-training checkpoint
model = BartForConditionalGeneration.from_pretrained("./checkpoints_bert/checkpoint-1000/")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")

So I wonder: if I want to pre-train and then fine-tune an NLP model, should I use the same model class and the same tokenizer for both stages?
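
Concretely, should the consistent version look roughly like the following? (This is just a sketch; ./checkpoints_bart/ is a placeholder, and it assumes the tokenizer was saved alongside the model in that checkpoint directory.)

from transformers import BartTokenizer, BartForConditionalGeneration

# Pre-training stage: start from the released BART weights
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
# ... run the denoising (DAE) pre-training and save to ./checkpoints_bart/ ...

# Fine-tuning stage: reload the same classes from that checkpoint
model = BartForConditionalGeneration.from_pretrained("./checkpoints_bart/checkpoint-1000/")
tokenizer = BartTokenizer.from_pretrained("./checkpoints_bart/checkpoint-1000/")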

DAE pre-training and fine-tuning of the BART model
