I'm trying to use the text-classification pipeline from Hugging Face transformers to perform sentiment analysis, but some of my texts exceed the limit of 512 tokens. I want the pipeline to truncate the excess tokens automatically. I tried the approach from this thread, but it did not work.
Here is my code:
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# "model" is a local directory containing my fine-tuned model and tokenizer
nlp = pipeline(
    'sentiment-analysis',
    model=AutoModelForSequenceClassification.from_pretrained("model", return_dict=False),
    tokenizer=AutoTokenizer.from_pretrained("model", return_dict=False),
    framework="pt",
    return_all_scores=False,
)

# article is a string that can be longer than 512 tokens
output = nlp(article)
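
For reference, this is the kind of behavior I'm hoping to get. Below is a minimal sketch of what I mean, assuming the pipeline call forwards tokenizer keyword arguments such as truncation and max_length to its tokenizer (I'm not sure whether that holds for my transformers version):

# Hypothetical usage: ask the pipeline to truncate each input to the
# model's 512-token limit instead of raising an error on long texts.
output = nlp(article, truncation=True, max_length=512)

If this isn't supported, is there another way to tell the pipeline to truncate its inputs automatically?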