# Markov Networks
### Key ideas
* Our goal is almost always to learn a joint distribution, $P(X)$. Once we have the joint distribution, inference (computing marginals and conditionals) tends to be straightforward.
* In order to learn the joint distribution, we introduce *factors*. Factors allow us to capture the *affinity* between connected random variables (see the sketch after this list).
* In order to learn the factors, we must introduce *parameters*. See more [here](https://towardsdatascience.com/markov-networks-undirected-graphical-models-dfb19effd8cb) and [here](https://www.youtube.com/watch?v=m-W0gLpOT94&list=PL3pGy4HtqwD2kwldm81pszxZDJANK3uGV&index=132).
* We learn these parameters by optimizing some *objective function*; see [here](https://youtu.be/H3yWjhthGw0?list=PL3pGy4HtqwD2kwldm81pszxZDJANK3uGV&t=598).
* The selection of our parameters is a *modeling choice*.
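A minimal sketch of the factor idea above (all names and values are illustrative, not from the lecture): a two-node Markov network over binary variables $A$ and $B$ with a single pairwise factor $\phi(A, B)$. Multiplying all factors gives an unnormalized measure; dividing by the partition function $Z$ yields the joint distribution, after which inference is just summation.

```python
import numpy as np

# Illustrative pairwise factor phi(A, B) over two binary variables.
# Larger entries encode higher affinity between those assignments.
phi = np.array([
    [30.0,  5.0],   # phi(A=0, B=0), phi(A=0, B=1)
    [ 1.0, 10.0],   # phi(A=1, B=0), phi(A=1, B=1)
])

# The joint is the product of all factors (here, just phi itself),
# normalized by the partition function Z.
Z = phi.sum()
P = phi / Z

print(P)              # joint distribution P(A, B)
print(P.sum(axis=1))  # marginal P(A) -- inference reduces to a sum
```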
### Restricted Boltzmann Machine
* Goal: learn an abstract (latent) representation of the observed random variables
* Gibbs distribution: see [here](https://youtu.be/m-W0gLpOT94?list=PL3pGy4HtqwD2kwldm81pszxZDJANK3uGV&t=1265); a standard statement is given below
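For reference, the standard form of the Gibbs distribution (the lecture's exact notation may differ): a product of factors $\phi_i$ over their scopes $D_i$, normalized by the partition function $Z$. An RBM is the special case where the factors come from an energy function over visible units $v$ and hidden units $h$.

```latex
% Gibbs distribution: product of factors, normalized by Z
P(X_1, \ldots, X_n) = \frac{1}{Z} \prod_{i=1}^{m} \phi_i(D_i),
\qquad
Z = \sum_{X_1, \ldots, X_n} \prod_{i=1}^{m} \phi_i(D_i)

% RBM as a Gibbs distribution: weights W and biases a, b
% define the energy of a joint configuration (v, h)
E(v, h) = -a^\top v - b^\top h - v^\top W h,
\qquad
P(v, h) = \frac{1}{Z} e^{-E(v, h)}
```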
### Random notes
* When dealing with latent variables, we never know what they actually represent (see [here](https://youtu.be/lXrFX3vjtjQ?list=PL3pGy4HtqwD2kwldm81pszxZDJANK3uGV&t=1296))
---
Date: 20210622
Links to: [003-Data-Science-MOC](003-Data-Science-MOC.md)
References:
* [Markov Network, NPTEL course](https://www.youtube.com/watch?v=xlqn-hfrqeU&list=PL3pGy4HtqwD2kwldm81pszxZDJANK3uGV&index=131&t=10s)
* [Graphical Models: A Combinatorial and Geometric Perspective - Lecture 1](http://www.fields.utoronto.ca/talks/Graphical-Models-Combinatorial-and-Geometric-Perspective-1)
* [Graphical Models: A Combinatorial and Geometric Perspective - Lecture 2](http://www.fields.utoronto.ca/talks/Graphical-Models-Combinatorial-and-Geometric-Perspective)
* [Graphical Models: A Combinatorial and Geometric Perspective - Lecture 3](http://www.fields.utoronto.ca/talks/Graphical-Models-Combinatorial-and-Geometric-Perspective-0)