In lab meeting this week, we discussed unsupervised learning in the context of deep generative models, namely $\beta$-variational auto-encoders ($\beta$-VAEs), drawing from the original, Higgins et al. 2017 (ICLR), and its follow-up, Burgess et al. 2018. The classic VAE represents a clever approach to learning highly expressive generative models, defined by a deep neural network that transforms samples from a standard normal distribution to some distribution of interest (e.g., natural images). Technically, VAE training seeks to maximize a lower bound on the likelihood $p_\theta(x)$, where $p_\theta(x|z)$ defines the generative mapping from latents $z$ to data $x$. This “evidence lower bound” (ELBO) depends on a variational approximation to the posterior, $q_\phi(z|x)$, which is also parametrized by a deep neural network (the so-called “encoder”).
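As a concrete sanity check (my own toy example, not from either paper), the bound can be verified numerically in a small discrete model where the evidence $p(x)$ is computable exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete model: 3 latent states z, 4 observable states x.
p_z = np.array([0.5, 0.3, 0.2])                  # prior p(z)
p_x_given_z = rng.dirichlet(np.ones(4), size=3)  # likelihood p(x|z), rows sum to 1
p_x = p_z @ p_x_given_z                          # exact marginal p(x)

x = 1  # an observed data point

# An arbitrary (suboptimal) variational posterior q(z|x).
q = np.array([0.2, 0.5, 0.3])

# ELBO = E_q[ log p(x, z) - log q(z|x) ]
log_joint = np.log(p_z) + np.log(p_x_given_z[:, x])
elbo = np.sum(q * (log_joint - np.log(q)))

assert elbo <= np.log(p_x[x]) + 1e-9  # the ELBO never exceeds the evidence

# Equality holds when q(z|x) is the exact posterior.
posterior = np.exp(log_joint) / p_x[x]
elbo_exact = np.sum(posterior * (log_joint - np.log(posterior)))
assert np.isclose(elbo_exact, np.log(p_x[x]))
```

The gap between the evidence and the ELBO is exactly $D_{\mathrm{KL}}(q(z|x)\,\|\,p(z|x))$, which is why maximizing the ELBO over $q$ also sharpens the variational posterior.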

A crucial drawback to the classic VAE, however, is that the learned latent representations tend to lack interpretability. The $\beta$-VAE seeks to overcome this limitation by learning “disentangled” representations, in which single latents are sensitive to single generative factors in the data and relatively invariant to others (Bengio et al. 2013). I would call these “intuitively robust” — rotating an apple (orientation) shouldn’t make its latent representation any less red (color) or any less fruity (type). To encourage such representations, $\beta$-VAEs optimize a modified ELBO given by:

$$\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x|z)\right] - \beta\, D_{\mathrm{KL}}\!\left(q_\phi(z|x)\,\|\,p(z)\right),$$

with $\beta \geq 1$ and standard VAEs corresponding to $\beta = 1$. The new hyperparameter $\beta$ controls the optimization’s tension between maximizing the data likelihood and limiting the expressiveness of the variational posterior $q_\phi(z|x)$ relative to a fixed latent prior $p(z)$.
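For concreteness, here is a minimal NumPy sketch of this objective (as a loss, i.e. the negated ELBO) for a diagonal-Gaussian encoder and Bernoulli decoder; the function and argument names (`beta_vae_loss`, `x_recon_logits`, `mu`, `log_var`) are my own placeholders standing in for the encoder and decoder network outputs:

```python
import numpy as np

def beta_vae_loss(x, x_recon_logits, mu, log_var, beta=4.0):
    """Negative beta-VAE ELBO for one example: reconstruction term plus
    a beta-weighted KL from the diagonal-Gaussian posterior to N(0, I)."""
    # Bernoulli reconstruction negative log-likelihood (binary data assumed).
    probs = 1.0 / (1.0 + np.exp(-x_recon_logits))
    recon_nll = -np.sum(x * np.log(probs) + (1 - x) * np.log(1 - probs))
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ).
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return recon_nll + beta * kl
```

Setting `beta=1.0` recovers the standard VAE loss, and larger `beta` penalizes posteriors that stray from the prior more heavily.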

Recent work has been interested in tuning the latent representations of deep generative models (Adversarial Autoencoders (Makhzani et al. 2016), InfoGANs (Chen et al. 2016), Total Correlation VAEs (Chen et al. 2019), among others), but the generalization used by $\beta$-VAEs in particular looked somehow familiar to me. This is because $\beta$-VAEs recapitulate the classical rate-distortion theory problem. This was also observed briefly in recent work by Alemi et al. 2018, but I would like to elaborate and show explicitly how $\beta$-VAEs reduce to a distortion-rate minimization using deep generative models.

Rate-distortion theory is a theoretical framework for lossy data compression through a noisy channel. This fundamental problem in information theory balances the minimum permissible amount of information (in bits) transmitted across the channel, the “rate”, against the corruption of the original signal, a penalty measured by a “distortion” function $d$. Our terminology changes, but the fundamental problem is the same; I made that comparison as obvious as possible in the figure below.

**Derivation.** Given a dataset $x$ with a distribution $p^*(x)$, define any statistical mapping $q(z|x)$ that encodes $x$ into a code $z$. Note that $q(z|x)$ is just an encoder, and together they induce a joint distribution $q(x, z) = q(z|x)\,p^*(x)$ with a marginal $q(z)$. The distortion-rate optimization would minimize distortion subject to a maximum rate $R$, i.e.

$$\min_{q(z|x)} \;\mathbb{E}_{q(x,z)}\left[d(x, z)\right] \quad \text{subject to} \quad I(x; z) \leq R.$$

Consider first the mutual information. We leverage a more tractable upper bound with

$$I(x; z) = \mathbb{E}_{p^*(x)}\left[D_{\mathrm{KL}}\!\left(q(z|x)\,\|\,q(z)\right)\right] \leq \mathbb{E}_{p^*(x)}\left[D_{\mathrm{KL}}\!\left(q(z|x)\,\|\,p(z)\right)\right],$$

which holds for any distribution $p(z)$, since the gap between the two sides is exactly $D_{\mathrm{KL}}(q(z)\,\|\,p(z)) \geq 0$.

We’ve replaced the marginal $q(z)$ induced by our choice of encoder with another distribution $p(z)$ that makes the optimization more tractable, e.g. $\mathcal{N}(0, I)$ in the VAE. Introducing a Lagrange multiplier $\beta$ for the rate constraint, our objective can be rewritten as

$$\min_{q(z|x)} \;\mathbb{E}_{p^*(x)}\left[\,\mathbb{E}_{q(z|x)}\left[d(x, z)\right] + \beta\, D_{\mathrm{KL}}\!\left(q(z|x)\,\|\,p(z)\right)\right].$$
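The upper bound on the rate, and the fact that its slack is exactly $D_{\mathrm{KL}}(q(z)\,\|\,p(z))$, is easy to verify numerically for discrete distributions. The following is my own toy check, not code from any of the cited papers:

```python
import numpy as np

rng = np.random.default_rng(1)

p_x = np.array([0.6, 0.4])                       # data distribution p*(x)
q_z_given_x = rng.dirichlet(np.ones(3), size=2)  # encoder q(z|x), one row per x
q_z = p_x @ q_z_given_x                          # induced marginal q(z)
p_z = np.full(3, 1/3)                            # fixed latent prior p(z)

def kl(a, b):
    """KL divergence between two discrete distributions."""
    return np.sum(a * np.log(a / b))

# I(x; z) = E_x[ KL(q(z|x) || q(z)) ], and its tractable upper bound.
mi = np.sum(p_x * [kl(q_z_given_x[i], q_z) for i in range(2)])
bound = np.sum(p_x * [kl(q_z_given_x[i], p_z) for i in range(2)])

assert mi <= bound + 1e-12
# The gap is exactly KL(q(z) || p(z)).
assert np.isclose(bound - mi, kl(q_z, p_z))
```

Because the gap is a KL divergence to the induced marginal, the bound is tight exactly when the prior $p(z)$ matches the aggregate posterior $q(z)$.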

Suppose the distortion of interest is posterior density (mis)estimation, $d(x, z) = -\log p_\theta(x|z)$. Such a function penalizes representations $z$ from which we cannot regenerate an observed data vector $x$ through the decoding network with high probability. A typical distortion-rate problem would *fix* the distortion function, but we choose to learn this decoder. We can optimize the objective *for each* $x$ to eliminate the outer expectation over the data $p^*(x)$, fix $\beta$, and recover the $\beta$-VAE objective precisely:

$$\max_{\theta,\, q(z|x)} \;\mathbb{E}_{q(z|x)}\left[\log p_\theta(x|z)\right] - \beta\, D_{\mathrm{KL}}\!\left(q(z|x)\,\|\,p(z)\right).$$
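To convince myself of this equivalence, here is a toy numerical check (discrete distributions standing in for the neural networks, my own construction) that the distortion-plus-rate objective at $\beta = 1$ is exactly the negative ELBO:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete stand-ins: prior p(z), decoder p(x|z), encoder q(z|x) for one datum x.
p_z = np.array([0.5, 0.3, 0.2])
p_x_given_z = rng.dirichlet(np.ones(4), size=3)
x = 1
q = np.array([0.2, 0.5, 0.3])  # an arbitrary encoder q(z|x)

# Distortion term E_q[-log p(x|z)] plus the beta-weighted rate term KL(q || p_z).
beta = 1.0
distortion = np.sum(q * -np.log(p_x_given_z[:, x]))
rate = np.sum(q * np.log(q / p_z))
objective = distortion + beta * rate

# At beta = 1, this objective is exactly the negative ELBO.
elbo = np.sum(q * (np.log(p_z) + np.log(p_x_given_z[:, x]) - np.log(q)))
assert np.isclose(objective, -elbo)
```

Minimizing distortion plus $\beta$-weighted rate is thus the same computation as maximizing the $\beta$-VAE objective, term by term.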

When $\beta > 1$, our optimization prioritizes minimizing the second term (the rate) over maximizing the first (i.e., minimizing the distortion). In this sense, the authors’ argument for large $\beta$ can be reinterpreted as an argument for higher-distortion, lower-rate codes (*read:* latent representations) to encourage interpretability. I edited a figure below from Alemi et al. 2018 to clarify this.

Information-theoretic hypotheses abound. Perhaps enforcing optimization in this region could discourage solutions that depend on learning an ultra-powerful decoder (*VAE:* generator) $p_\theta(x|z)$, in other words solutions that depend on a good *decode*, not necessarily a good *code*. Does eliminating this possibility simply make room to fish out an ad-hoc interpretable representation, or is there a more sophisticated explanation waiting to be found? We’ll see.