
I am solving a meta-learning problem using the Reptile algorithm, as used here. I have two datasets. One contains the following classes: iris, pupil, and sclera, along with their annotations. The other contains the following classes: iris, pupil, sclera, and blood vessels, along with their annotations. How can I combine both datasets efficiently to train my meta-learning model?

The problem is that the first dataset has no annotations for blood vessels, so we can't simply merge the two and treat them as a single dataset.

How should I tackle this problem? (Please point me towards a reference if possible.)

I am wondering whether it is a good approach to consider each class from both datasets as a separate subtask for training the meta-learning model.
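
To make the per-class-subtask idea concrete, here is a rough sketch of what I have in mind. It is only a sketch under assumptions, not my real setup: it assumes each dataset is a list of `(image, mask_dict)` pairs where `mask_dict` maps a class name to a binary mask, and the tiny model, input size, and hyper-parameters are placeholders.

```python
# Sketch: one binary segmentation task per (dataset, class), trained with Reptile.
# Assumptions (not from the original post): dataset format, model, and
# hyper-parameters below are placeholders for illustration only.
import random
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SHAPE = (128, 128, 3)  # assumed input size

def build_model():
    # Tiny fully-convolutional net with a single-channel (binary) output,
    # so the same architecture can be reused for every per-class subtask.
    inp = layers.Input(IMG_SHAPE)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)
    model = models.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.SGD(1e-2),
                  loss="binary_crossentropy")
    return model

def make_tasks(dataset_a, dataset_b):
    # One subtask per (dataset, class). Dataset A has no blood-vessel masks,
    # so that class simply never appears among A's tasks.
    tasks = []
    for data, classes in [(dataset_a, ["iris", "pupil", "sclera"]),
                          (dataset_b, ["iris", "pupil", "sclera", "blood_vessels"])]:
        for cls in classes:
            tasks.append((data, cls))
    return tasks

def sample_batch(data, cls, batch_size=4):
    pairs = random.sample(data, batch_size)
    x = np.stack([img for img, _ in pairs])
    y = np.stack([masks[cls][..., None] for _, masks in pairs])
    return x, y

def reptile(tasks, meta_iters=1000, inner_steps=5, meta_lr=0.1):
    model = build_model()
    for _ in range(meta_iters):
        theta = [w.copy() for w in model.get_weights()]
        data, cls = random.choice(tasks)      # sample one subtask
        for _ in range(inner_steps):          # inner-loop SGD on that task
            x, y = sample_batch(data, cls)
            model.train_on_batch(x, y)
        phi = model.get_weights()
        # Reptile outer update: theta <- theta + meta_lr * (phi - theta)
        model.set_weights([t + meta_lr * (p - t) for t, p in zip(theta, phi)])
    return model
```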

Na462
  • You probably want to ask this question on one of the sister sites, https://datascience.stackexchange.com/ or https://stats.stackexchange.com/ . You will most likely receive better answers there. – KDecker Nov 10 '22 at 17:24
  • Could you use an ensemble model with two submodels, where each is trained on one of the datasets, concatenate their results, then train on those results? – Djinn Nov 10 '22 at 18:00
  • @Djinn Could you please elaborate on "concatenate their results, then train on those results"? – Na462 Nov 13 '22 at 10:10
  • For example, if you have a Dense layer (or Flatten/Reshape) on the submodels with shape `(256,)` and `(128,)`, you can concatenate them and further train on that resulting shape `(384,)`. It'd essentially act as a layer whose input and output tensor shapes you can see. – Djinn Nov 13 '22 at 11:31
  • If I am not wrong, your idea works if we train multiple models on different datasets and then concatenate their bottleneck layers, meaning one big model is built by aggregating smaller models. But how would you train it, i.e. what would the output layer be? – Na462 Nov 15 '22 at 13:55 (see the sketch below this thread)
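
For reference, a minimal Keras sketch of the concatenation idea discussed in the comments above. The submodels, the shapes `(256,)` and `(128,)`, and the 4-class softmax head are assumptions for illustration, not something confirmed in the thread; the point is that the concatenated features feed a new trainable head, and that head is what you fit.

```python
# Sketch: concatenate the bottleneck outputs of two pretrained submodels and
# train a new head on top. `submodel_a` and `submodel_b` are hypothetical
# models already trained on the two datasets, ending in feature vectors of
# shape (256,) and (128,) respectively.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_combined(submodel_a, submodel_b, num_classes=4):
    # Freeze the pretrained feature extractors so only the new head is trained.
    submodel_a.trainable = False
    submodel_b.trainable = False

    inp = layers.Input(submodel_a.input_shape[1:])   # assume both take the same input
    feat_a = submodel_a(inp)                          # shape (None, 256)
    feat_b = submodel_b(inp)                          # shape (None, 128)
    merged = layers.Concatenate()([feat_a, feat_b])   # shape (None, 384)

    # New trainable head on top of the concatenated features; this is the part
    # you train, on whatever labels are available for the combined objective.
    x = layers.Dense(128, activation="relu")(merged)
    out = layers.Dense(num_classes, activation="softmax")(x)

    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

# Hypothetical usage:
# combined = build_combined(model_a, model_b)
# combined.fit(x_train, y_train, epochs=5)
```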

0 Answers