# MCMC Algorithms

There is a large family of algorithms that perform MCMC. Most of these algorithms can be expressed at a high level as follows (a minimal code sketch of this loop appears after the list):

1. Start at the current position.
2. Propose moving to a new position (investigate a pebble near you).
3. Accept/reject the new position based on the position's adherence to the data and prior distributions (ask if the pebble likely came from the mountain).
4. Then:
    a. If you accept: move to the new position. Return to Step 1.
    b. Else: do not move to the new position. Return to Step 1.
5. After a large number of iterations, return all accepted positions.

In this way, we move in the general direction of the regions where the posterior distribution exists, collecting samples sparingly on the journey. Once we reach the posterior distribution, we can collect samples freely, as they likely all belong to the posterior distribution.

If the current position of the MCMC algorithm is in an area of extremely low probability, which is often the case when the algorithm begins (typically at a random location in the space), the algorithm will move to positions that are likely not from the posterior but are better than everything else nearby. Thus the first moves of the algorithm are not very reflective of the posterior. We'll deal with this later.

In the preceding pseudocode, notice that only the current position matters (new positions are investigated only near the current position). We can describe this property as memorylessness; that is, the algorithm does not care how it arrived at its current position, only that it is there.
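To make the loop concrete, here is a minimal sketch of one common instantiation of it, the Metropolis algorithm, in plain NumPy. The toy model (inferring the mean `mu` of normal data under a normal prior), the `log_posterior` function, and the step size are all assumptions made for illustration; they are not the book's implementation (the book works through PyMC instead).

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy problem (assumed for illustration): infer the mean `mu` of
# normally distributed data, with a Normal(0, 10) prior on mu.
data = rng.normal(loc=3.0, scale=1.0, size=50)

def log_posterior(mu):
    # Unnormalized log posterior = log prior + log likelihood.
    log_prior = -0.5 * (mu / 10.0) ** 2
    log_likelihood = -0.5 * np.sum((data - mu) ** 2)
    return log_prior + log_likelihood

def metropolis(n_iter=10_000, step_size=0.5):
    position = rng.normal()  # start at a random location in the space
    samples = []
    for _ in range(n_iter):
        # Step 2: propose a new position near the current one.
        proposal = position + rng.normal(scale=step_size)
        # Step 3: accept with probability min(1, p(proposal) / p(position)),
        # computed in log space for numerical stability.
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(position):
            position = proposal  # Step 4a: accept and move
        # Step 4b: on rejection, stay put. Either way, record the current
        # position; this is how the "accepted positions" are returned.
        samples.append(position)
    return np.array(samples)
```

Note that `proposal` depends only on `position`, never on the path taken to get there, which is exactly the memorylessness property described above.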
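Building on the sketch above (and its assumed `metropolis` function), the early moves that are not reflective of the posterior are typically discarded before treating the rest as posterior samples; a possible usage, with a cutoff chosen arbitrarily here:

```python
samples = metropolis()
burn_in = 1_000                        # drop the early, unrepresentative moves
posterior_samples = samples[burn_in:]
print(posterior_samples.mean())        # should land near the true mean of 3.0
```

---
Date: 20220522
Links to: [Probability MOC](Probability%20MOC.md)
Tags: #review
References: *Bayesian Methods for Hackers*, Chapter 3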