Understanding Variational Autoencoders


We can simply assume the encoder learns to approximate the true posterior p(z|x) with a network parameterized by \phi; hence the encoder is a function q_{\phi}(z|x) that approximates this posterior.
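As an illustration, here is a minimal encoder sketch in PyTorch. It is not the post's specific architecture; the dimensions (x_dim, hidden_dim, z_dim) are placeholder choices. The network with parameters \phi outputs the mean and log-variance of a diagonal Gaussian q_{\phi}(z|x).

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Approximate posterior q_phi(z|x): maps an input x to the mean and
    log-variance of a diagonal Gaussian over the latent variable z."""
    def __init__(self, x_dim=784, hidden_dim=256, z_dim=16):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, z_dim)       # mean of q_phi(z|x)
        self.logvar = nn.Linear(hidden_dim, z_dim)   # log-variance of q_phi(z|x)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)
```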

The decoder takes a sample from the latent space, z \sim N(0, 1) (when generating without the encoder), and maps it to the data space x.
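Continuing the sketch above (assumed dimensions, not the post's own model), a decoder maps latent samples back to data space. For generation we can skip the encoder entirely and feed samples drawn from the standard-normal prior.

```python
class Decoder(nn.Module):
    """Maps a latent sample z back to the data space x."""
    def __init__(self, z_dim=16, hidden_dim=256, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, x_dim), nn.Sigmoid(),  # outputs in [0, 1], e.g. pixel intensities
        )

    def forward(self, z):
        return self.net(z)

# Generation without the encoder: sample z from the prior and decode.
decoder = Decoder()
z = torch.randn(8, 16)        # 8 latent samples from the standard-normal prior
x_generated = decoder(z)      # shape (8, 784)
```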

We must train the encoder and decoder jointly. To achieve this, we need a single joint loss that couples the two networks.
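The post does not spell the loss out here, but the standard joint objective for a VAE is the negative ELBO: a reconstruction term plus the KL divergence between q_{\phi}(z|x) and the prior. A hedged training-step sketch, reusing the Encoder and Decoder classes above and a placeholder data batch:

```python
def vae_loss(x, x_recon, mu, logvar):
    """Joint objective (negative ELBO): reconstruction error plus
    KL( q_phi(z|x) || N(0, I) ), trained end-to-end through both networks."""
    recon = nn.functional.binary_cross_entropy(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

encoder, decoder = Encoder(), Decoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.rand(32, 784)                                    # placeholder batch in [0, 1]
mu, logvar = encoder(x)
# Reparameterization trick: sample z ~ q_phi(z|x) so gradients flow
# from the decoder back into the encoder parameters phi.
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
loss = vae_loss(x, decoder(z), mu, logvar)
opt.zero_grad()
loss.backward()
opt.step()
```

Because the same scalar loss depends on both \phi and the decoder's parameters, a single backward pass updates the two networks jointly.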

