The variational auto-encoder

In this chapter, we are going to use various ideas that we have learned in the class in order to present a very influential recent probabilistic model called the variational autoencoder.

Variational autoencoders (VAEs) are a deep learning technique for learning latent representations. They have also been used to draw images, achieve state-of-the-art results in semi-supervised learning, as well as to interpolate between sentences.

There are many online tutorials on VAEs. Our presentation will probably be a bit more technical than the average, since our goal will be to highlight connections to ideas seen in class, as well as to show how these ideas are useful in machine learning research.

Deep generative models

Consider a directed latent-variable model of the form

$$p(x, z) = p(x \mid z) p(z)$$

with observed $x \in \mathcal{X}$, where $\mathcal{X}$ can be either continuous or discrete, as well as latent $z \in \mathbb{R}^k$.


Variational auto-encoder applied to face images (modeled by $x$). The learned latent space $z$ can be used to interpolate between facial expressions.
To make things concrete, you may think of $x$ as being an image (e.g., a human face), and $z$ as being latent factors (not seen during training) that explain features of the face. For example, one coordinate of $z$ can encode whether the face is happy or sad, another one whether the face is male or female, etc.

We may also be interested in models with many layers, e.g. $p(x, z_1, \ldots, z_m) = p(x \mid z_1) p(z_1 \mid z_2) \cdots p(z_{m-1} \mid z_m) p(z_m)$. These are often called deep generative models and can learn hierarchies of latent representations. In this chapter, we will assume for simplicity that there is only one latent layer.
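
To make the single-latent-layer setup concrete, here is a minimal NumPy sketch of ancestral sampling from $p(z) p(x \mid z)$. It assumes, purely for illustration, a standard Normal prior $p(z)$ and a Gaussian likelihood whose mean is a linear function of $z$; the dimensions, the weight matrix `W`, and the noise scale are made-up example values rather than anything prescribed by the model above.

```python
import numpy as np

rng = np.random.default_rng(0)

k, d = 2, 5                      # latent dimension k, observed dimension d (illustrative)
W = rng.normal(size=(d, k))      # made-up parameters of the likelihood p(x | z)

# Ancestral sampling: first draw the latent z from p(z), then x from p(x | z).
z = rng.standard_normal(k)             # z ~ p(z) = N(0, I)
x = rng.normal(loc=W @ z, scale=0.1)   # x ~ p(x | z), here a Gaussian centered at W z

print("z =", z)
print("x =", x)
```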

Learning deep generative models

Suppose now that we are given a dataset $D = \{x^1, x^2, \ldots, x^n\}$. We are interested in the following inference and learning tasks:

- Learning the parameters $\theta$ of $p$.
- Approximate posterior inference over $z$: given an image $x$, what are its latent factors?
- Approximate marginal inference over $x$: given an image $x$ with missing parts, how do we fill in these parts?

We are also going to make the following additional assumptions:

- Intractability: computing the posterior probability $p(z \mid x)$ is intractable.
- Big data: the dataset $D$ is too large to fit in memory; we can only work with small, subsampled batches of $D$.

Many interesting models fall in this class; the variational auto-encoder will be one such model.

Trying out the standard approaches

We have learned several techniques in the class that could be used to solve our three tasks. Let’s try them out.

The EM algorithm can be used to learn latent-variable models. Recall, however, that performing the E step requires computing the posterior $p(z \mid x)$, which we have assumed to be intractable. In the M step, we learn the $\theta$ by looking at the entire dataset (note, however, that there exists a generalization called online EM, which performs the M step over mini-batches), which is going to be too large to hold in memory.

To perform approximate inference, we may use mean field. Recall, however, that one step of mean field requires us to compute an expectation whose time complexity scales exponentially with the size of the Markov blanket of the target variable.

What is the size of the Markov blanket for $z_i$? If we assume that at least one component of $x$ depends on each component of $z$, then this introduces a V-structure into the graph of our model (the $x_i$, which are observed, are explaining away the differences among the $z_i$). Thus, we know that all the $z$ variables will depend on each other and the Markov blanket of some $z_i$ will contain all the other $z$-variables. This will make mean field intractable (the authors refer to this when they say “the required integrals for any reasonable mean-field VB algorithm are also intractable”; they also discuss the limitations of EM and sampling methods in the introduction and the methods section). An equivalent (and simpler) explanation is that $p$ will contain a factor $p(x_i \mid z_1, \ldots, z_k)$, in which all the $z$ variables are tied.

Another approach would be to use sampling-based methods. In the VAE paper, the authors compare against these kinds of algorithms, but they find that they don’t scale well to large datasets. In addition, techniques such as Metropolis-Hastings require a hand-crafted proposal distribution, which might be difficult to choose.

Auto-encoding variational Bayes

We will now learn about Auto-encoding variational Bayes (AEVB), an algorithm that can efficiently solve our three inference and learning tasks; the variational auto-encoder will be one instantiation of this algorithm.

AEVB is based on ideas from variational inference. Recall that in variational inference, we are interested in maximizing the evidence lower bound (ELBO)

$$\mathcal{L}(p_\theta, q_\phi) = \mathbb{E}_{q_\phi(z)}\left[\log p_\theta(x, z) - \log q_\phi(z)\right]$$

over the space of all $q_\phi$. The ELBO satisfies the equation

$$\log p_\theta(x) = KL(q_\phi(z) \,\|\, p(z \mid x)) + \mathcal{L}(p_\theta, q_\phi).$$
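
For completeness, here is a short derivation of this identity; it is standard algebra using Bayes' rule in the form $p(z \mid x) = p_\theta(x, z) / p_\theta(x)$:

$$KL(q_\phi(z) \,\|\, p(z \mid x)) = \mathbb{E}_{q_\phi(z)}\left[\log q_\phi(z) - \log p(z \mid x)\right] = \mathbb{E}_{q_\phi(z)}\left[\log q_\phi(z) - \log p_\theta(x, z)\right] + \log p_\theta(x) = \log p_\theta(x) - \mathcal{L}(p_\theta, q_\phi).$$

Rearranging gives the equation above; since the KL term is non-negative, $\mathcal{L}(p_\theta, q_\phi)$ is indeed a lower bound on $\log p_\theta(x)$.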

Since $x$ is fixed, we can define $q(z)$ to be conditioned on $x$, i.e., we work with $q(z \mid x)$. This means that we are effectively choosing a different $q(z)$ for every $x$, which will produce a better posterior approximation than always choosing the same $q(z)$.

How exactly do we optimize over $q(z \mid x)$? We may use mean field, but this turns out to be insufficiently accurate for our purposes. The assumption that $q$ is fully factored is too strong, and the coordinate descent optimization algorithm is too simplistic.

Black-box variational inference

The first important idea used in the AEVB algorithm is a general-purpose approach for optimizing $q$ that works for large classes of $q$ (that are more complex than in mean field). We later combine this algorithm with specific choices of $q$.

This approach, called black-box variational inference, consists in maximizing the ELBO using gradient descent over $\phi$ (instead of, e.g., a coordinate descent algorithm like mean field). The term black-box variational inference was first coined by Ranganath et al., with the ideas being inspired by earlier work, such as the wake-sleep algorithm. Hence, it only assumes that $q_\phi$ is differentiable in its parameters $\phi$.

Furthermore, instead of only doing inference, we will simultaneously perform learning via gradient descent on both $\phi$ and $\theta$ (jointly). Optimization over $\phi$ will keep the ELBO tight around $\log p(x)$; optimization over $\theta$ will keep pushing the lower bound (and hence $\log p(x)$) up. This is somewhat similar to how the EM algorithm optimizes a lower bound on the marginal likelihood.

The score function gradient estimator

To perform black-box variational inference, we need to compute the gradient

$$\nabla_{\theta, \phi} \, \mathbb{E}_{q_\phi(z)}\left[\log p_\theta(x, z) - \log q_\phi(z)\right].$$

Taking the expectation with respect to $q$ in closed form will often not be possible. Instead, we may take Monte Carlo estimates of the gradient by sampling from $q$. This is easy to do for the gradient with respect to $\theta$: we may swap the gradient and the expectation and estimate the following expression via Monte Carlo:

$$\mathbb{E}_{q_\phi(z)}\left[\nabla_\theta \log p_\theta(x, z)\right].$$

However, taking the gradient with respect to $\phi$ is more difficult. Notice that we cannot swap the gradient and the expectation, since the expectation is being taken with respect to the distribution that we are trying to differentiate!

One way to estimate this gradient is via the so-called score function estimator:

$$\nabla_\phi \, \mathbb{E}_{q_\phi(z)}\left[\log p_\theta(x, z) - \log q_\phi(z)\right] = \mathbb{E}_{q_\phi(z)}\left[\left(\log p_\theta(x, z) - \log q_\phi(z)\right) \nabla_\phi \log q_\phi(z)\right].$$

This follows from some basic algebra and calculus and takes about half a page to derive. We leave it as an exercise to the reader, but for those interested, the full derivation may be found in Appendix B of this paper.

The above identity places the gradient inside the expectation, which we may now evaluate using Monte Carlo. We refer to this as the score function estimator of the gradient.

Unfortunately, the score function estimator has an important shortcoming: it has a high variance. What does this mean? Suppose you are using Monte Carlo to estimate an expectation whose mean is 1. If your samples are $0.9, 1.1, 0.96, 1.05, \ldots$ and are close to 1, then after a few samples, you will get a good estimate of the true expectation. If, on the other hand, you sample zero 99 times out of 100, and you sample 100 once, the expectation is still correct, but you will have to take a very large number of samples to figure out that the true expectation is actually one. We refer to the latter case as being high variance.

This is the kind of problem we often run into with the score function estimator. In fact, its variance is so high that we cannot use it to learn many models.
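
To make this concrete, here is a small NumPy sketch of the score function estimator on a toy problem (not the ELBO itself): we differentiate $\mathbb{E}_{z \sim \mathcal{N}(\mu, 1)}[z^2]$ with respect to $\mu$, whose true gradient is $2\mu$. The objective $f(z) = z^2$ and the parameter values are illustrative stand-ins; the point is the large per-sample spread of the estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 1.5                      # the parameter we differentiate with respect to
n = 10_000

# Toy objective: E_{z ~ N(mu, 1)}[ z^2 ], whose true gradient w.r.t. mu is 2*mu = 3.
z = rng.normal(mu, 1.0, size=n)

# Score function estimator: f(z) * d/dmu log N(z; mu, 1) = z^2 * (z - mu).
score_samples = z**2 * (z - mu)

print("estimate:", score_samples.mean())        # close to 3, but only for large n
print("per-sample std:", score_samples.std())   # large spread => high variance
```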

The key contribution of the VAE paper is to propose an alternative estimator that is much better behaved. This is done in two steps: we first reformulate the ELBO so that parts of it can be computed in closed form (without Monte Carlo), and then we use an alternative gradient estimator, based on the so-called reparametrization trick.

The SGVB estimator

The reformulation of the ELBO is as follows:

$$\log p_\theta(x) \geq \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - KL(q_\phi(z \mid x) \,\|\, p(z)).$$

It is straightforward to verify using some algebra that this is the same ELBO as before, just written differently.
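
For the reader who wants to see this step: expanding $\log p_\theta(x, z) = \log p_\theta(x \mid z) + \log p(z)$ inside the original ELBO gives

$$\mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x, z) - \log q_\phi(z \mid x)\right] = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \mathbb{E}_{q_\phi(z \mid x)}\left[\log \frac{q_\phi(z \mid x)}{p(z)}\right],$$

and the second expectation is exactly $KL(q_\phi(z \mid x) \,\|\, p(z))$.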

This reformulation has a very interesting interpretation. First, think of $x$ as an observed data point. The right-hand side consists of two terms; both involve taking a sample $z \sim q_\phi(z \mid x)$, which we can interpret as a code describing $x$. We are also going to call $q$ the encoder.

In the first term, $\log p_\theta(x \mid z)$ is the log-likelihood of the observed $x$ given the code $z$ that we have sampled. This term is maximized when $p_\theta(x \mid z)$ assigns high probability to the original $x$. It is trying to reconstruct $x$ given the code $z$; for that reason we call $p$ the decoder network, and the term is called the reconstruction error.

The second term is the KL divergence between $q_\phi(z \mid x)$ and the prior $p(z)$, which we will fix to be a unit Normal. It encourages the codes $z$ to look Gaussian. We call it the regularization term. It prevents $q_\phi(z \mid x)$ from simply encoding an identity mapping, and instead forces it to learn some more interesting representation (e.g., facial features in our first example).

Thus, our optimization objective is trying to fit a $q_\phi(z \mid x)$ that will map $x$ into a useful latent space $z$ from which we are able to reconstruct $x$ via $p_\theta(x \mid z)$. This type of objective is reminiscent of auto-encoder neural networks (an auto-encoder is a pair of neural networks $f, g$ that are composed as $\bar{x} = f(g(x))$; they are trained to minimize the reconstruction error $\lVert \bar{x} - x \rVert$; in practice, $g(x)$ learns to embed $x$ in a latent space that often has an intuitive interpretation). This is where the AEVB algorithm takes its name.

The reparametrization trick

As we have seen earlier, optimizing our objective requires a good estimate of the gradient. The main technical contribution of the VAE paper is a low-variance gradient estimator based on the reparametrization trick.

Under certain mild conditions, we may express the distribution $q_\phi(z \mid x)$ as the following two-step generative process.

- First, we sample a noise variable $\epsilon$ from a simple distribution $p(\epsilon)$, e.g. the standard Normal: $\epsilon \sim p(\epsilon) = \mathcal{N}(0, 1)$.
- Then, we apply a deterministic transformation $g_\phi(\epsilon, x)$ that maps the random noise into a more complex distribution: $z = g_\phi(\epsilon, x)$.

For many interesting classes of $q_\phi$, it is possible to choose a $g_\phi(\epsilon, x)$ such that $z = g_\phi(\epsilon, x)$ will be distributed according to $q_\phi(z \mid x)$.

Gaussian variables provide the simplest example of the reparametrization trick. Instead of writing $z \sim q_{\mu, \sigma}(z) = \mathcal{N}(\mu, \sigma)$, we may write

$$z = g_{\mu, \sigma}(\epsilon) = \mu + \epsilon \cdot \sigma,$$

where $\epsilon \sim \mathcal{N}(0, 1)$. It is easy to check that the two ways of expressing the random variable $z$ lead to the same distribution.
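
Here is a quick NumPy sanity check of this equivalence (the values of $\mu$ and $\sigma$ are illustrative): samples produced via $\mu + \epsilon \cdot \sigma$ have the same mean and standard deviation as samples drawn directly from $\mathcal{N}(\mu, \sigma)$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.5          # illustrative parameters
n = 100_000

direct = rng.normal(mu, sigma, size=n)   # sample z ~ N(mu, sigma) directly
eps = rng.standard_normal(n)
reparam = mu + eps * sigma               # reparametrized: z = g(eps) = mu + eps * sigma

print(direct.mean(), direct.std())       # both approximately (2.0, 0.5)
print(reparam.mean(), reparam.std())
```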

The biggest advantage of this approach is that we may now write the gradient of an expectation with respect to $q(z)$ (for any $f$) as

$$\nabla_\phi \, \mathbb{E}_{z \sim q_\phi(z \mid x)}\left[f(x, z)\right] = \nabla_\phi \, \mathbb{E}_{\epsilon \sim p(\epsilon)}\left[f(x, g_\phi(\epsilon, x))\right] = \mathbb{E}_{\epsilon \sim p(\epsilon)}\left[\nabla_\phi f(x, g_\phi(\epsilon, x))\right].$$

The gradient is now inside the expectation and we may use Monte Carlo to get an estimate of the right-hand term. This approach will have a much lower variance than the score function estimator (for more details as to why, have a look at the appendix of the paper by Rezende et al.) and will enable us to learn models that we otherwise couldn't learn.
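
Returning to the toy problem from the earlier sketch (differentiating $\mathbb{E}_{z \sim \mathcal{N}(\mu, 1)}[z^2]$ with respect to $\mu$), the comparison below, again a toy illustration rather than the full ELBO, shows the reparametrized estimator producing the same answer as the score function estimator with a much smaller per-sample spread.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n = 1.5, 10_000           # true gradient of E[z^2] w.r.t. mu is 2*mu = 3

# Score function estimator: z^2 * (z - mu), with z ~ N(mu, 1).
z = rng.normal(mu, 1.0, size=n)
score = z**2 * (z - mu)

# Reparametrized estimator: write z = mu + eps and differentiate inside
# the expectation: d/dmu (mu + eps)^2 = 2 * (mu + eps).
eps = rng.standard_normal(n)
reparam = 2.0 * (mu + eps)

print("score   :", score.mean(), "+/-", score.std())      # correct mean, large std
print("reparam :", reparam.mean(), "+/-", reparam.std())  # correct mean, std = 2
```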

Choosing $q$ and $p$

Until now, we did not specify the exact form of $q(z \mid x)$ or $p(x \mid z)$, besides saying that these could be arbitrary functions. How should one parametrize these distributions? The best $q(z \mid x)$ should be able to approximate the true posterior $p(z \mid x)$. Similarly, $p(x)$ should be flexible enough to represent the richness of the data.

For these reasons, we are going to parametrize and by neural networks. These are extremely expressive function approximators that can be efficiently optimized over large datasets. This choice also draws a fascinating bridge between classical machine learning methods (approximate Bayesian inference in this case) and modern deep learning.

But what does it mean to parametrize a distribution with a neural network? Let's assume again that $q(z \mid x)$ and $p(x \mid z)$ are Normal distributions; we may write $q$ as

$$q(z \mid x) = \mathcal{N}(z; \vec\mu(x), \vec\sigma(x) \odot I),$$

where $\vec\mu(x), \vec\sigma(x)$ are deterministic vector-valued functions of $x$ parametrized by an arbitrarily complex neural network.
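
As an illustration, here is a minimal PyTorch sketch of such a parametrization of $q(z \mid x)$: a small network that outputs $\vec\mu(x)$ and $\vec\sigma(x)$. The layer sizes, the Tanh nonlinearity, and the choice of predicting $\log \vec\sigma(x)$ (a common trick for keeping $\vec\sigma(x)$ positive) are illustrative choices, not something prescribed by the text above.

```python
import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    """Parametrizes q(z | x) = N(z; mu(x), sigma(x)^2 I) with a neural network."""

    def __init__(self, x_dim=784, z_dim=20, hidden=500):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, z_dim)          # the mean vector mu(x)
        self.log_sigma = nn.Linear(hidden, z_dim)   # log of the std vector sigma(x)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.log_sigma(h).exp()

# Example: encode a batch of 32 flattened 28x28 images.
encoder = GaussianEncoder()
mu, sigma = encoder(torch.rand(32, 784))
print(mu.shape, sigma.shape)  # torch.Size([32, 20]) torch.Size([32, 20])
```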

More generally, the same technique can be applied to any exponential family distribution by parametrizing the sufficient statistics by a function of $x$.

The variational auto-encoder

We are now ready to define the AEVB algorithm and the variational autoencoder, its most popular instantiation.

The AEVB algorithm is simply the combination of (1) the auto-encoding ELBO reformulation, (2) the black-box variational inference approach, and (3) the reparametrization-based low-variance gradient estimator. It optimizes the auto-encoding ELBO using black-box variational inference with the reparametrized gradient estimator. This algorithm is applicable to any deep generative model $p_\theta$ with latent variables that is differentiable in $\theta$.

A variational auto-encoder uses the AEVB algorithm to learn a specific model $p$ using a particular encoder $q$. The model $p$ is parametrized as

$$p(x \mid z) = \mathcal{N}(x; \vec\mu(z), \vec\sigma(z) \odot I),$$
$$p(z) = \mathcal{N}(z; 0, I),$$

where $\vec\mu(z), \vec\sigma(z)$ are parametrized by a neural network (typically, two dense hidden layers of 500 units each). The model $q$ is similarly parametrized as

$$q(z \mid x) = \mathcal{N}(z; \vec\mu(x), \vec\sigma(x) \odot I).$$

This choice of $p$ and $q$ allows us to further simplify the auto-encoding ELBO. In particular, we can use a closed-form expression to compute the regularization term, and we only use Monte Carlo estimates for the reconstruction term. These expressions are given in the paper.
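
To make this concrete, here is a minimal PyTorch sketch of one gradient step on such an objective; it is a hedged illustration rather than the paper's exact implementation. The regularization term uses the standard closed-form KL divergence between $\mathcal{N}(\vec\mu(x), \vec\sigma(x)^2 I)$ and $\mathcal{N}(0, I)$, and the reconstruction term uses a single reparametrized Monte Carlo sample; the network sizes, activations, and hyperparameters are illustrative choices.

```python
import math
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal Gaussian-encoder / Gaussian-decoder VAE sketch (illustrative sizes)."""

    def __init__(self, x_dim=784, z_dim=20, hidden=500):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.Tanh())
        self.enc_mu = nn.Linear(hidden, z_dim)
        self.enc_log_sigma = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.Tanh())
        self.dec_mu = nn.Linear(hidden, x_dim)
        self.dec_log_sigma = nn.Linear(hidden, x_dim)

    def elbo(self, x):
        # Encoder: q(z | x) = N(z; mu(x), sigma(x)^2 I).
        h = self.enc(x)
        mu, log_sigma = self.enc_mu(h), self.enc_log_sigma(h)

        # Reparametrized sample z = mu + eps * sigma (a single Monte Carlo sample).
        z = mu + torch.randn_like(mu) * log_sigma.exp()

        # Decoder: p(x | z) = N(x; mu(z), sigma(z)^2 I).
        hd = self.dec(z)
        dmu, dlog_sigma = self.dec_mu(hd), self.dec_log_sigma(hd)

        # Reconstruction term: Monte Carlo estimate of E_q[log p(x | z)].
        recon = -0.5 * (math.log(2 * math.pi) + 2 * dlog_sigma
                        + ((x - dmu) / dlog_sigma.exp()) ** 2).sum(dim=1)

        # Regularization term: closed-form KL(q(z | x) || N(0, I)).
        kl = 0.5 * (mu ** 2 + (2 * log_sigma).exp() - 2 * log_sigma - 1).sum(dim=1)

        return (recon - kl).mean()  # average ELBO over the batch

# One illustrative gradient step on a random batch of flattened 28x28 "images".
vae = VAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
opt.zero_grad()
loss = -vae.elbo(torch.rand(32, 784))  # maximize the ELBO = minimize its negative
loss.backward()
opt.step()
```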

We may interpret the variational autoencoder as a directed latent-variable probabilistic graphical model. We may also view it as a particular objective for training an auto-encoder neural network; unlike previous approaches, this objective derives reconstruction and regularization terms from a more principled, Bayesian perspective.

Experimental results


Interpolating over MNIST digits by interpolating over latent variables
The VAE can be applied to images in order to learn interesting latent representations. The VAE paper contains a few examples on the Frey face dataset and on the MNIST digits. On the face dataset, we can interpolate between facial expressions by interpolating between latent variables (e.g. we can generate smooth transitions between “angry” and “surprised”). On the MNIST dataset, we can similarly interpolate between numbers.

The authors also compare their method against three alternative approaches: the wake-sleep algorithm, Monte Carlo EM, and hybrid Monte Carlo. The latter two methods are sampling-based approaches; they are quite accurate, but don't scale well to large datasets. Wake-sleep is a variational inference algorithm that scales much better; however, it does not use the exact gradient of the ELBO (it uses an approximation), and hence it is not as accurate as AEVB. The paper illustrates this by plotting learning curves.
