We propose a novel amortized variational inference scheme for an empirical
Bayes meta-learning model, where model parameters are treated as latent
variables. We learn the prior distribution over model parameters conditioned on
limited training data using a variational autoencoder approach. Our framework
shares a single amortized inference network between the conditional prior and
the variational posterior distributions over the model parameters. While
the posterior leverages both the labeled support and query data, the
conditional prior is based only on the labeled support data. We show that in
earlier work, which relies on a Monte-Carlo approximation, the conditional
prior collapses to a Dirac delta function. In contrast, our variational approach
prevents this collapse and preserves uncertainty over the model parameters. We
evaluate our approach on the miniImageNet, CIFAR-FS, and FC100 datasets, and
demonstrate its advantages over previous work.
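To make the shared-inference idea concrete, the per-task objective takes a form along the following lines (a schematic sketch under assumed notation, not necessarily the paper's exact formulation): let D^s denote the labeled support set, D^q = (x^q, y^q) the labeled query set, θ the model parameters, and q_φ the shared amortized inference network.

\[
\log p(y^q \mid x^q, \mathcal{D}^s) \;\geq\;
\mathbb{E}_{q_\phi(\theta \mid \mathcal{D}^s \cup \mathcal{D}^q)}\!\big[\log p(y^q \mid x^q, \theta)\big]
\;-\;
\mathrm{KL}\!\big(q_\phi(\theta \mid \mathcal{D}^s \cup \mathcal{D}^q) \,\big\|\, q_\phi(\theta \mid \mathcal{D}^s)\big).
\]

The same network q_φ produces both distributions; only its input changes: support plus query for the posterior, support alone for the conditional prior.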
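A one-line view of the collapse claim (a sketch of the intuition, not the paper's derivation): if the conditional prior is the only distribution in the training objective and is fit, via Monte-Carlo estimates, to maximize expected predictive log-likelihood with no KL term anchoring it, the expectation is maximized by concentrating all mass on a single parameter value,

\[
\max_{q_\phi}\; \mathbb{E}_{q_\phi(\theta \mid \mathcal{D}^s)}\big[\log p(y^q \mid x^q, \theta)\big]
\;\;\Longrightarrow\;\;
q_\phi(\theta \mid \mathcal{D}^s) \to \delta\big(\theta - \theta^{\ast}\big),
\qquad
\theta^{\ast} = \arg\max_\theta \log p(y^q \mid x^q, \theta),
\]

which is the Dirac delta behavior noted above. The KL term in the bound sketched earlier rules out this degenerate solution.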
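Below is a minimal PyTorch-style sketch of one network serving as both conditional prior and posterior. Everything here is an illustrative assumption (the class name InferenceNet, the mean-pooled set encoding, the diagonal Gaussian output, the dimensions), not the authors' implementation.

```python
import torch
import torch.nn as nn

class InferenceNet(nn.Module):
    """Amortized inference net: maps a set of feature vectors to a
    diagonal Gaussian over model parameters. Hypothetical architecture."""
    def __init__(self, feat_dim: int, param_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.mu = nn.Linear(128, param_dim)
        self.log_var = nn.Linear(128, param_dim)

    def forward(self, feats: torch.Tensor):
        # Mean-pool over the set dimension for permutation invariance.
        h = self.encoder(feats).mean(dim=0)
        return self.mu(h), self.log_var(h)

def kl_diag_gauss(mu_q, lv_q, mu_p, lv_p):
    """KL(q || p) for diagonal Gaussians given mean / log-variance."""
    return 0.5 * torch.sum(
        lv_p - lv_q + (lv_q.exp() + (mu_q - mu_p) ** 2) / lv_p.exp() - 1.0
    )

net = InferenceNet(feat_dim=64, param_dim=32)  # one shared network
support = torch.randn(5, 64)    # stand-in for labeled support features
query = torch.randn(15, 64)     # stand-in for labeled query features

# Conditional prior: the network sees only the support set.
mu_p, lv_p = net(support)
# Variational posterior: the same network sees support and query.
mu_q, lv_q = net(torch.cat([support, query], dim=0))

# Reparameterized sample of model parameters from the posterior,
# plus the KL term that keeps the posterior close to the prior.
theta = mu_q + torch.randn_like(mu_q) * (0.5 * lv_q).exp()
kl = kl_diag_gauss(mu_q, lv_q, mu_p, lv_p)
```

In a full training loop, θ would parameterize the task model, the expected log-likelihood on the query set would be estimated with such samples, and the resulting bound (likelihood term minus the KL) would be maximized with respect to the shared network's weights.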