I am using the Hugging Face Trainer API to fine-tune an ASR model, e.g. https://huggingface.co/openai/whisper-tiny.
In a callback function, I call the `evaluate` API to calculate the CER metric.
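For reference, here is a minimal sketch of the kind of `compute_metrics` function I am using, following the standard Whisper fine-tuning recipe (the checkpoint name and variable names are just for illustration):

```python
import evaluate
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
cer_metric = evaluate.load("cer")

def compute_metrics(pred):
    pred_ids = pred.predictions
    label_ids = pred.label_ids

    # -100 marks padded label positions; restore the pad token id so decoding works
    label_ids[label_ids == -100] = processor.tokenizer.pad_token_id

    pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = processor.batch_decode(label_ids, skip_special_tokens=True)

    # a single batched call -> one aggregate CER over the whole validation set
    cer = cer_metric.compute(predictions=pred_str, references=label_str)
    return {"cer": cer}
```

This function is passed to the trainer via `Seq2SeqTrainer(..., compute_metrics=compute_metrics)`, with `predict_with_generate=True` so the predictions are generated token ids.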
This outputs a single CER score per validation step, i.e. one aggregate `eval_cer` value in the Trainer's evaluation logs.
However, in my case I am trying to get a CER score for each validation example, instead of one value for the entire sample set. That is, at each validation step I would like to see a list of per-utterance CER scores rather than only the default single value.
If I call the metric separately for each example inside `compute_metrics`, evaluation becomes very slow.
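Concretely, the slow variant I tried looks roughly like this (same `processor` and `cer_metric` as in the sketch above; the function name is just for illustration):

```python
import numpy as np

def compute_metrics_per_example(pred):
    pred_ids = pred.predictions
    label_ids = pred.label_ids
    label_ids[label_ids == -100] = processor.tokenizer.pad_token_id

    pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = processor.batch_decode(label_ids, skip_special_tokens=True)

    # one metric call per utterance -- this loop is what makes evaluation crawl
    per_example_cer = [
        cer_metric.compute(predictions=[p], references=[r])
        for p, r in zip(pred_str, label_str)
    ]
    print(per_example_cer)  # the per-utterance scores I want at each validation step

    # note: the plain mean of per-utterance CERs is not identical to the
    # corpus-level CER, which weights each utterance by its reference length
    return {"cer": float(np.mean(per_example_cer))}
```

I suspect the slowdown comes from the fixed overhead of each `compute` call (the `evaluate` metrics buffer inputs internally before scoring), so many tiny calls cost far more than one batched call, but I have not verified that.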
Does anyone know a way to get per-example CER scores during model validation without this slowdown?