In this week’s lab meeting, I presented the following paper from Max Welling’s group:

**Auto-Encoding Variational Bayes**

Diederik P. Kingma, Max Welling

arXiv, 2013.

The paper proposes an efficient inference and learning method for directed probabilistic models with continuous latent variables (whose posterior distributions are intractable), for use with large datasets. The directed graphical model under consideration is shown in Figure 1 of the paper.

The dataset $X = \{x^{(i)}\}_{i=1}^{N}$ consists of $N$ i.i.d. samples of some continuous or discrete variable $x$. Each datapoint is generated from an unobserved continuous random variable $z$ (solid lines: $p_\theta(z)\,p_\theta(x|z)$), where $\theta$ denotes the parameters of the generative model. The ultimate task is to learn $\theta$ and infer $z$. A general way to solve such a problem is to marginalize out $z$ to get the marginal likelihood $p_\theta(x) = \int p_\theta(z)\,p_\theta(x|z)\,dz$, and maximize this likelihood to learn $\theta$. However, in many cases, e.g. a neural network with a nonlinear hidden layer, the integral is intractable. To overcome this intractability, sampling-based methods such as Monte Carlo EM can be introduced, but when the dataset is large, batch optimization is too costly and the sampling loop per datapoint is very expensive. Therefore, the paper introduces a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, works even in the intractable case.
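To see why naive sampling is expensive, consider a toy one-dimensional model (my own illustration, not the paper’s MLP setting): $z \sim \mathcal{N}(0,1)$ and $x|z \sim \mathcal{N}(z, 0.5^2)$. The marginal likelihood $p(x) = \mathbb{E}_{p(z)}[p(x|z)]$ can be estimated by brute-force Monte Carlo, but it needs a large number of prior samples per datapoint:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_marginal_likelihood(x, n_samples=100_000):
    """Naive Monte Carlo estimate of p(x) = E_{p(z)}[p(x|z)]
    for the toy model z ~ N(0, 1), x|z ~ N(z, 0.5^2)."""
    z = rng.standard_normal(n_samples)
    # Gaussian density of x given z, with sigma = 0.5
    px_given_z = np.exp(-0.5 * ((x - z) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))
    return px_given_z.mean()

# In this toy model the true marginal is N(0, 1 + 0.25); compare at x = 0:
estimate = mc_marginal_likelihood(0.0)
exact = 1.0 / np.sqrt(2 * np.pi * 1.25)
```

With a nonlinear decoder there is no `exact` to compare against, and this per-datapoint sampling loop is exactly what becomes prohibitive at scale.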

First, they define a recognition model $q_\phi(z|x)$: an approximation to the intractable true posterior $p_\theta(z|x)$, interpreted as a probabilistic encoder (dashed lines in the directed graph); correspondingly, $p_\theta(x|z)$ is the probabilistic decoder. Given the recognition model, the variational lower bound on the marginal log-likelihood of datapoint $x^{(i)}$ is defined as

$$\log p_\theta(x^{(i)}) \ge \mathcal{L}(\theta, \phi; x^{(i)}) = -D_{KL}\!\left(q_\phi(z|x^{(i)}) \,\|\, p_\theta(z)\right) + \mathbb{E}_{q_\phi(z|x^{(i)})}\!\left[\log p_\theta(x^{(i)}|z)\right]$$

In the paper’s setting, the prior is a standard Gaussian and the approximate posterior is a diagonal Gaussian whose moments are output by the encoder:

$$p_\theta(z) = \mathcal{N}(z; 0, I), \qquad q_\phi(z|x^{(i)}) = \mathcal{N}\!\left(z; \mu^{(i)}, \operatorname{diag}(\sigma^{2(i)})\right).$$

Therefore, the KL term has an analytical form,

$$-D_{KL}\!\left(q_\phi(z|x^{(i)}) \,\|\, p_\theta(z)\right) = \frac{1}{2}\sum_{j=1}^{J}\left(1 + \log \sigma_j^{2} - \mu_j^{2} - \sigma_j^{2}\right).$$

The major tricky term is the expectation $\mathbb{E}_{q_\phi(z|x^{(i)})}[\log p_\theta(x^{(i)}|z)]$, which usually has no closed-form solution. The usual Monte Carlo gradient estimator for this type of problem exhibits very high variance and is impractical for taking derivatives w.r.t. $\phi$. Given such a problem, the paper proposes a reparameterization of the expectation term that yields a lower bound estimator which can be straightforwardly optimized using standard stochastic gradient methods.
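The closed-form KL term is easy to check numerically; here is a minimal sketch (the names `mu` and `log_var` are my own):

```python
import numpy as np

def neg_kl(mu, log_var):
    """Analytic -KL( N(mu, diag(sigma^2)) || N(0, I) ),
    i.e. 0.5 * sum_j (1 + log sigma_j^2 - mu_j^2 - sigma_j^2)."""
    return 0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

# When q equals the prior (mu = 0, sigma = 1), the KL divergence is zero:
print(neg_kl(np.zeros(3), np.zeros(3)))  # 0.0
```

For any other $(\mu, \sigma)$ the KL is strictly positive, which is what lets it act as a regularizer pulling the posterior toward the prior.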

The key reparameterization trick constructs samples in two steps:

- $\epsilon \sim p(\epsilon)$ (random seed independent of $\phi$)
- $z = g_\phi(\epsilon, x)$ (differentiable perturbation)

such that $z \sim q_\phi(z|x)$ (the correct distribution). This yields an estimator which typically has lower variance than the generic estimator:

$$\tilde{\mathcal{L}}(\theta, \phi; x^{(i)}) = -D_{KL}\!\left(q_\phi(z|x^{(i)}) \,\|\, p_\theta(z)\right) + \frac{1}{L}\sum_{l=1}^{L} \log p_\theta\!\left(x^{(i)} \,|\, z^{(i,l)}\right),$$

where $z^{(i,l)} = g_\phi(\epsilon^{(i,l)}, x^{(i)})$ and $\epsilon^{(i,l)} \sim p(\epsilon)$.
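For the Gaussian case, $g_\phi$ is simply $z = \mu + \sigma \odot \epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$. A quick numerical sanity check (toy scalar values of my choosing) that the transformed samples follow the intended distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 1.5, 0.5  # variational parameters (toy values)

# Step 1: noise from a fixed distribution, independent of (mu, sigma)
eps = rng.standard_normal(200_000)
# Step 2: differentiable transformation z = g(eps) = mu + sigma * eps
z = mu + sigma * eps

# z is distributed as N(mu, sigma^2), so the sample moments match:
print(z.mean(), z.std())
```

Because the randomness now lives in `eps` rather than in the sampling of `z`, gradients with respect to `mu` and `sigma` flow through the deterministic map $g$, which is the whole point of the trick.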

A connection with auto-encoders becomes clear when looking at the objective function: the first term, the KL divergence of the approximate posterior from the prior, acts as a regularizer, while the second term is an expected negative reconstruction error.
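Putting the two terms together, a per-datapoint estimator for binary data might look like the following sketch (the `decode` callable standing in for a Bernoulli decoder network is hypothetical, as are all variable names):

```python
import numpy as np

def elbo_estimate(x, mu, log_var, decode, rng, L=1):
    """Estimate L(theta, phi; x) = -KL + (1/L) * sum_l log p(x | z^(l)),
    with z^(l) = mu + sigma * eps^(l) via the reparameterization trick."""
    neg_kl = 0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
    sigma = np.exp(0.5 * log_var)
    recon = 0.0
    for _ in range(L):
        eps = rng.standard_normal(mu.shape)
        z = mu + sigma * eps              # reparameterized sample
        p = decode(z)                     # Bernoulli means for binary x
        recon += np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    return neg_kl + recon / L

# Sanity check: constant decoder p = 0.5 and q equal to the prior,
# so the KL term vanishes and only the reconstruction term remains.
rng = np.random.default_rng(2)
x = np.array([1.0, 0.0])
value = elbo_estimate(x, np.zeros(2), np.zeros(2),
                      lambda z: np.full(2, 0.5), rng)
```

In the paper both `mu`/`log_var` and `decode` are MLPs trained jointly by ascending this estimator with stochastic gradients over minibatches.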

In the experiments, they set $p_\theta(x|z)$ to be a Bernoulli or Gaussian MLP, depending on the type of data being modeled, and compared their method against the wake-sleep algorithm and Monte Carlo EM on the MNIST and Frey Face datasets.

Overall, I think their contributions are two-fold. First, the reparameterization of the variational lower bound yields an estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, they showed that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. The stochastic gradient method also makes the algorithm easy to parallelize, improving efficiency on large-scale datasets.