I uploaded the TF universal-sentence-encoder-qa model to Vertex AI and want to use it to embed some question and answer data.
This model has two signatures: question_encoder and answer_encoder.
My question: how can I tell Vertex AI to use a specific signature when running a batch prediction?
For online prediction, someone has already given a solution: Specify signature name on Vertex AI Predict.
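For reference, the online-prediction workaround boils down to sending a TF Serving-style request body, which accepts a `signature_name` field, through the raw prediction endpoint (which forwards the body verbatim to the serving container). A minimal sketch of such a body; the `input` key and the sample question are my assumptions for illustration, not taken from the model docs:

```python
import json

# TF Serving REST predict-style request body; the signature is selected
# by the top-level "signature_name" field. The "input" key and sample
# text are placeholders for illustration.
body = {
    "signature_name": "question_encoder",  # or "answer_encoder"
    "instances": [{"input": "What is the capital of France?"}],
}

# This JSON string would be sent as the raw prediction request body.
payload = json.dumps(body)
print(payload)
```

This works for single online requests, but I don't see an equivalent place to put `signature_name` in a batch prediction job.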
For batch prediction, I cannot find a solution yet. The closest thing I have found is an article about Google AI Platform, which suggests specifying the signature via a gcloud CLI parameter. However, that seems to apply only to the legacy AI Platform, NOT to Vertex AI.
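For completeness, this is roughly the legacy AI Platform-style command the article suggests (a sketch; the job name, model name, and bucket paths are placeholders):

```shell
# Legacy AI Platform batch prediction (NOT Vertex AI).
# --signature-name selects which SavedModel signature to run.
# Job name, model name, and gs:// paths are placeholders.
gcloud ai-platform jobs submit prediction my_batch_job \
  --model=use_qa_model \
  --input-paths=gs://my-bucket/questions.jsonl \
  --output-path=gs://my-bucket/output/ \
  --region=us-central1 \
  --data-format=text \
  --signature-name=question_encoder
```

I cannot find an equivalent `--signature-name` option for Vertex AI batch prediction jobs.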
It seems that Vertex AI doesn't support this via the gcloud CLI yet (or maybe I missed something).