
I currently use a Hugging Face pipeline for sentiment analysis like so:

from transformers import pipeline
classifier = pipeline('sentiment-analysis', device=0)

The problem is that when I pass texts longer than 512 tokens, it simply crashes, saying that the input is too long. Is there any way to pass the max_length and truncation parameters from the tokenizer directly to the pipeline?

My workaround is to do:

from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer, device=0)

And then when I call the tokenizer:

pt_batch = tokenizer(text, padding=True, truncation=True, max_length=512, return_tensors="pt")
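
After that, the encoded batch goes through the model by hand, roughly like this (a sketch of the remaining step; the softmax readout assumes a standard classification head):

import torch

# run the truncated batch through the model and read off class probabilities
with torch.no_grad():
    logits = model(**pt_batch).logits
scores = logits.softmax(dim=-1)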

But it would be much nicer to simply be able to call the pipeline directly like so:

classifier(text, padding=True, truncation=True, max_length=512)
– Hooked
– EtienneT

3 Answers


You can pass tokenizer_kwargs at inference time:

model_pipeline = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0, return_all_scores=True)

# note: the pipeline sets return_tensors itself, so it doesn't belong in these kwargs
tokenizer_kwargs = {'padding': True, 'truncation': True, 'max_length': 512}

prediction = model_pipeline('sample text to predict', **tokenizer_kwargs)

For more details, you can check this link.
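
For a runnable end-to-end check that long inputs no longer crash (a sketch, assuming the nlptown checkpoint from the question):

from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model_pipeline = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0, return_all_scores=True)

# well over 512 tokens; truncation keeps it within the model's limit
long_text = "some very long review " * 300
print(model_pipeline(long_text, padding=True, truncation=True, max_length=512))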

  • Thanks. This works with regular Python. I am trying it in PySpark. Where would you place the tokenizer_kwargs - when creating the udf or when calling the udf? if you can give me an example for pyspark, I would appreciate it. Thanks. schema = ArrayType(StructType([ StructField("score", FloatType(), True), StructField("label", StringType(), True) ])) ... ... tokenizer_kwargs = {'padding': True, 'truncation': True, 'max_length': 512} sentiment_udf = F.udf(model_pipeline, schema) df = df.withColumn('pred_label', sentiment_udf(F.col("text"))) – user1717931 May 16 '23 at 13:59
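
One way to wire that up in PySpark (an untested sketch; it assumes the model_pipeline, df, and text column from the comment above, and binds tokenizer_kwargs inside the function the UDF wraps rather than at the call site):

from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, StructType, StructField, FloatType, StringType

schema = ArrayType(StructType([
    StructField("score", FloatType(), True),
    StructField("label", StringType(), True),
]))

tokenizer_kwargs = {'padding': True, 'truncation': True, 'max_length': 512}

def classify(text):
    # the kwargs travel with the function, so the UDF stays single-argument;
    # depending on the transformers version you may need to unwrap an outer list
    return model_pipeline(text, **tokenizer_kwargs)

sentiment_udf = F.udf(classify, schema)
df = df.withColumn('pred_label', sentiment_udf(F.col("text")))

In other words, the kwargs go where the pipeline is called, inside the wrapped function, not at the withColumn call.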

This way should work:

classifier(text, padding=True, truncation=True)

If it doesn't, try loading the tokenizer as:

tokenizer = AutoTokenizer.from_pretrained(model_name, model_max_length=512)
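
To sanity-check that the limit took effect (a sketch, reusing the nlptown checkpoint from the question):

from transformers import AutoTokenizer, pipeline

model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_name, model_max_length=512)
print(tokenizer.model_max_length)  # 512

classifier = pipeline('sentiment-analysis', model=model_name, tokenizer=tokenizer, device=0)
print(classifier("long review " * 300, padding=True, truncation=True))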
– user6110729

This is the way:

from transformers import pipeline
generator = pipeline(task='text2text-generation', truncation=True, model=model, tokenizer=tokenizer)

# check which preprocessing parameters the pipeline captured
print(generator._preprocess_params)
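
The same construction-time binding should work for the sentiment task from the question (a sketch; truncation and max_length set here become the pipeline's preprocessing defaults for every call):

from transformers import pipeline

classifier = pipeline('sentiment-analysis',
                      model="nlptown/bert-base-multilingual-uncased-sentiment",
                      truncation=True, max_length=512, device=0)
print(classifier._preprocess_params)  # expect something like {'truncation': True, 'max_length': 512}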
– John Stud