A Variational Autoencoder is an autoencoder whose encoding distribution is regularised during training to ensure that its latent space has good properties, allowing us to generate new data. The term “variational” comes from the close relation between this regularisation and the variational inference method in statistics.
Dimensionality reduction
- The process of reducing the number of features that describe some data
- Selection (keep only some of the existing features)
- Extraction (reduce the number of features by creating new ones based on the old features)
- etc...
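The selection/extraction distinction above can be illustrated with a minimal NumPy sketch (the column indices and projection matrix here are arbitrary choices, just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))  # 100 samples described by 5 features

# Selection: keep a subset of the original features (here, columns 0 and 2).
X_selected = X[:, [0, 2]]

# Extraction: create new features as combinations of the old ones
# (here, an arbitrary linear projection onto 2 dimensions).
W = rng.normal(size=(5, 2))
X_extracted = X @ W
```

Both paths end with 2 features per sample, but selection discards columns outright while extraction mixes information from all of them.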
A Global Framework
- Encoder : the process that produces the “new features” from the “old features”
- Decoder : the reverse process

- The main purpose of a dimensionality reduction method is to find the best encoder/decoder pair among a given family.

For a given set of possible encoders and decoders, we look for the pair that keeps the maximum information when encoding and, therefore, has the minimum reconstruction error when decoding.
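This search can be made concrete with a small sketch: fix a family of linear encoder/decoder pairs and measure the reconstruction error of one member. The family and the pseudoinverse decoder are illustrative assumptions, not part of the original notes:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))

def reconstruction_error(X, encode, decode):
    """Mean squared error between the data and its encode-then-decode round trip."""
    X_hat = decode(encode(X))
    return float(np.mean((X - X_hat) ** 2))

# One member of a (hypothetical) family of linear encoder/decoder pairs:
# the encoder projects 3 old features onto 2 new ones, and the decoder is
# the best linear inverse of that projection (the Moore-Penrose pseudoinverse).
W = rng.normal(size=(3, 2))
W_pinv = np.linalg.pinv(W)

err = reconstruction_error(X, lambda x: x @ W, lambda z: z @ W_pinv)
```

Searching over the family then means minimising `err` over the encoder parameters `W`.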

Principal components analysis(PCA)
- Build n_e new independent features that are linear combinations of the n_d old features, such that the projections of the data onto the subspace defined by these new features are as close as possible to the initial data (in terms of Euclidean distance).
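A standard way to compute those n_e directions is the singular value decomposition of the centred data; the following NumPy sketch makes the encode/decode view of PCA explicit (the synthetic data is just a stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4)) @ rng.normal(size=(4, 4))  # correlated features
n_e = 2  # target number of new features

Xc = X - X.mean(axis=0)                  # centre the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:n_e]                    # n_e orthonormal directions

Z = Xc @ components.T                    # encode: project onto the subspace
X_hat = Z @ components + X.mean(axis=0)  # decode: map back to the original space
```

Among all linear encoder/decoder pairs of this size, this projection minimises the Euclidean reconstruction error.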

Autoencoders
Using Linear Transformations
- Looking for the best linear subspace to project the data onto, with as little information loss as possible
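A linear autoencoder can be written as two weight matrices trained by gradient descent on the reconstruction error; this sketch (with arbitrary sizes, learning rate, and iteration count) shows the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 4))  # correlated features
X = (X - X.mean(axis=0)) / X.std(axis=0)                 # standardise

# Linear autoencoder: encoder W_e (4 -> 2), decoder W_d (2 -> 4),
# trained by gradient descent on the reconstruction MSE.
W_e = 0.1 * rng.normal(size=(4, 2))
W_d = 0.1 * rng.normal(size=(2, 4))
lr = 0.01
for _ in range(2000):
    Z = X @ W_e                      # encode
    X_hat = Z @ W_d                  # decode
    G = 2 * (X_hat - X) / len(X)     # gradient of the MSE w.r.t. X_hat
    grad_Wd = Z.T @ G
    grad_We = X.T @ (G @ W_d.T)
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We

mse = float(np.mean((X - X @ W_e @ W_d) ** 2))
```

With no activation functions, the best such autoencoder spans the same subspace as PCA, so the reconstruction error it converges towards matches the PCA one.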

Non-Linear Transformations
- A more complex (deeper, non-linear) architecture → more dimensionality reduction while keeping the reconstruction loss low
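The architectural difference can be sketched as a forward pass: stacking dense layers with a non-linearity such as tanh between them gives the encoder and decoder more expressive power than a single matrix. This untrained NumPy sketch (layer sizes chosen arbitrarily) only illustrates the shape of such a model:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    """Weights and bias for one fully connected layer (small random init)."""
    return 0.1 * rng.normal(size=(n_in, n_out)), np.zeros(n_out)

# Non-linear encoder: 8 -> 16 -> 2, with tanh activations.
W1, b1 = dense(8, 16)
W2, b2 = dense(16, 2)
# Non-linear decoder: 2 -> 16 -> 8.
W3, b3 = dense(2, 16)
W4, b4 = dense(16, 8)

def encode(x):
    return np.tanh(np.tanh(x @ W1 + b1) @ W2 + b2)

def decode(z):
    return np.tanh(z @ W3 + b3) @ W4 + b4

X = rng.normal(size=(5, 8))
X_hat = decode(encode(X))  # round trip through the 2-dimensional bottleneck
```

Training such a network (by gradient descent on the reconstruction error, as in the linear case) lets it compress data that no linear subspace fits well.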

Generating Content