
I am using the Hugging Face Trainer API to fine-tune an ASR model, e.g. https://huggingface.co/openai/whisper-tiny

In a callback function, I call the `evaluate` API to calculate the CER metric.

{{code-snippet-needed}} # i.e. What have you tried?

@ramraj-chandradevan, please add your code snippet here to show what you have tried.

It outputs a single CER score per validation step, e.g.

@ramraj-chandradevan, please add an example of what numbers you see at each validation step

However, in my case I am trying to get a CER score for each validation example instead of one value for the entire sample set.

@ramraj-chandradevan, please add an example of the expected behavior you'd like to see during the validation step. What numbers would you like to see at each validation step instead of the default single value?

If I call the metric for each example, the speed is very slow, e.g.

{{code-snippet-needed}} # i.e. What have you tried when calling CER score again to get validation numbers for each example?

@ramraj-chandradevan, please add the code snippet that you've tried when changing the `compute_metrics` function that you're reporting as "very slow"

Does anyone know a way to get a CER score for each example during model validation?
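Not an official API, but one self-contained workaround is to compute the character edit distance yourself, once per example, and derive both the per-example scores and the corpus-level score from the same counts; the sketch below uses a plain Levenshtein implementation, and every name in it is hypothetical:

```python
def char_edit_distance(pred: str, ref: str) -> int:
    """Character-level Levenshtein distance via dynamic programming."""
    prev = list(range(len(ref) + 1))
    for i, p in enumerate(pred, start=1):
        curr = [i]
        for j, r in enumerate(ref, start=1):
            cost = 0 if p == r else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def cer_report(predictions, references):
    """Return (per-example CER list, corpus-level CER) in a single pass."""
    per_example = []
    total_edits = 0
    total_chars = 0
    for p, r in zip(predictions, references):
        edits = char_edit_distance(p, r)
        per_example.append(edits / len(r))
        total_edits += edits
        total_chars += len(r)
    return per_example, total_edits / total_chars
```

Note that the corpus-level CER is total edits over total reference characters, not the mean of the per-example scores, which is why keeping the raw counts matters.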

• Welcome to Stack Overflow! Without a little more information, the other folks on Stack Overflow might flag the question as "low-quality" or "needs more focus" and eventually close it. @ramraj-chandradevan, I've added some snippets that would require your input so that the question has more information to help us help you better. Please spend some time filling in the details. – alvas Aug 12 '23 at 11:10

0 Answers