From what I can see in GCP's official docs (https://cloud.google.com/vertex-ai/docs/evaluation/introduction), the evaluation job ships with predefined metrics for a set of predefined tasks. If my training task is slightly more bespoke, with metrics/objectives outside the scope of the available tasks and metrics, how would I implement them?
Examples of such custom metrics would be MMD (maximum mean discrepancy), KL divergence loss, etc.
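For concreteness, here is a minimal sketch of the kind of metric I mean — a discrete KL divergence in plain NumPy (the function name and clipping constant are my own, not from any Vertex AI API):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions.

    Clips and renormalizes both inputs so zero-probability
    bins don't produce infinities.
    """
    p = np.clip(np.asarray(p, dtype=float), eps, None)
    q = np.clip(np.asarray(q, dtype=float), eps, None)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```

Identical distributions give 0, and the value grows as `q` diverges from `p`.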
My current thinking is:
- Package a custom evaluation app in a Docker container
- Create a CustomJob that runs my evaluation app
- Write the evaluation artefacts (e.g. results, plots) out to GCS?
Many thanks in advance!