# Expectation Maximization

In [statistics](https://en.wikipedia.org/wiki/Statistics "Statistics"), an **expectation–maximization** (**EM**) **algorithm** is an [iterative method](https://en.wikipedia.org/wiki/Iterative_method "Iterative method") for finding (local) [maximum likelihood](https://en.wikipedia.org/wiki/Maximum_likelihood "Maximum likelihood") or [maximum a posteriori](https://en.wikipedia.org/wiki/Maximum_a_posteriori "Maximum a posteriori") (MAP) estimates of [parameters](https://en.wikipedia.org/wiki/Parameter "Parameter") in [statistical models](https://en.wikipedia.org/wiki/Statistical_model "Statistical model") that depend on unobserved [latent variables](https://en.wikipedia.org/wiki/Latent_variable "Latent variable").

The EM iteration alternates between an expectation (E) step, which constructs a function for the expectation of the [log-likelihood](https://en.wikipedia.org/wiki/Likelihood_function#Log-likelihood "Likelihood function") evaluated with the current parameter estimates, and a maximization (M) step, which computes the parameters that maximize the expected log-likelihood found in the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step.

![](EM_Clustering_of_Old_Faithful_data.gif)

### Variational Autoencoders

See the [Variational Autoencoder](Variational%20Autoencoder.md) note for an interesting example of this approach.

---
Date: 20231027
Links to:
Tags:
References:

* [EM Algorithm: Theory & Example](https://chat.openai.com/share/809d4b3a-bbb2-497c-9412-77f26769417f)