# Trees and Reinforcement Learning
We can start by noting that **games are geometry**: we can think of them as trees, where each **node** in the tree is a **state** of the game.

For any two nodes to be a distance of one apart, there must be a *single action* that transitions one state into the other. Games have the geometry of trees (a constrained [DAG](Tree-vs-DAG.md)) because, from any given state, only a certain set of states can be reached next.
Consider the tree associated with playing checkers: every board position is a node, and every legal move is an edge.
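
Checkers' full tree is far too large to write out, but the same node/edge structure can be sketched for one-pile Nim (the game Ellenberg mentions below). A minimal sketch; the function names and the take-1/2/3 rule are illustrative assumptions, not anything fixed by the text:

```python
# A state of one-pile Nim is just the number of stones remaining.
# Each node of the game tree is a state; each edge is a single action.

def actions(state: int) -> list[int]:
    """Legal moves from a state: take 1, 2, or 3 stones (assumed rule)."""
    return [n for n in (1, 2, 3) if n <= state]

def transition(state: int, action: int) -> int:
    """One action moves us exactly one edge down the tree."""
    return state - action

# Two states are distance one apart iff a single action connects them:
assert transition(5, 2) == 3   # 5 -> 3 via the action "take 2"
print(actions(4))              # [1, 2, 3]: the branches below state 4
```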
> **Information** *flows* from the leaves of a tree to the root. **Action** *flows* from the root to the leaves.
As Ellenberg states:
> The tree represents the geometry of hierarchy for the same reason it represents the geometry of Nim, or the geometry of the garden of forking paths that make up our lives; there are no cycles, no infinite regress.
We can also note that the tree provides a set of **geometric constraints**: there are no cycles, and every node is reached by a unique path from the root.
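
To make the leaves-to-root flow concrete, here is a backward-induction (minimax) sketch over the same assumed Nim rules, under normal play (taking the last stone wins): values are computed at the leaves and backed up to the root, and the chosen action then flows back down.

```python
from functools import lru_cache

def actions(state: int) -> list[int]:
    return [n for n in (1, 2, 3) if n <= state]

def transition(state: int, action: int) -> int:
    return state - action

@lru_cache(maxsize=None)
def value(state: int) -> int:
    """+1 if the player to move can force a win, -1 otherwise."""
    if state == 0:
        return -1  # leaf: no stones left, so the previous player won
    # Information flows leaves -> root: back up each child's value,
    # negated because the opponent is the one to move at the child.
    return max(-value(transition(state, a)) for a in actions(state))

def best_action(state: int) -> int:
    """Action flows root -> leaves: take the edge toward the best child."""
    return max(actions(state), key=lambda a: -value(transition(state, a)))

print(value(4))        # -1: multiples of 4 are losing positions
print(best_action(5))  # 1: take one stone, leaving the opponent at 4
```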
### Summary
* A game has a *space* of configurations
* As we play we branch (a tree) into different configurations
* A single game is a *path*; the set of all paths is a *tree*. The *constraints* of the game *construct* the tree (sketched below).
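
A tiny sketch of that last point, again under the assumed Nim rules: enumerating every root-to-leaf path reconstructs the whole tree, and each path is one complete game.

```python
def all_games(state: int, history: tuple[int, ...] = ()) -> list[tuple[int, ...]]:
    """Return every complete game (root-to-leaf path) from `state`."""
    if state == 0:                 # leaf: the game is over
        return [history]
    games = []
    for a in (1, 2, 3):
        if a <= state:             # the game's constraints...
            games.extend(all_games(state - a, history + (a,)))  # ...construct the tree
    return games

paths = all_games(4)
print(len(paths))  # 7 distinct games start from a 4-stone pile
print(paths[0])    # (1, 1, 1, 1): one path through the tree
```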
---
Date: 20211129
Links to: [Trees](Trees.md) [Reinforcement Learning (old)](Reinforcement%20Learning%20(old).md) [Shape](Shape.md)
Tags:
References:
* Chapter 5, Shape