# Machine Learning and Expressiveness
A key idea to wrap your head around, closely related to [Algorithms Exploit Constraints](Algorithms%20Exploit%20Constraints.md), is that our network is meant to provide a **structure** expressive enough to perform the learning task we desire (say, a classification), and we then want an algorithm (such as [Backpropagation](Backpropagation.md)), combined with an [Objective Function](posts/Objective%20Function.md)/[Loss Function](Loss%20Function.md), to **search** and **exploit** that structure for an optimal solution.

As a super crude analogy, consider a sculptor who wants to build a 10 ft replica of the statue of David. The sculptor has a wide variety of materials to choose from: clay, marble, stone, metal, water, air, diamond, etc. Clearly, some materials have *properties* that make them *more expressive* than others; they provide more desirable *structure* for the sculptor to work with. The sculptor is essentially "searching the space" of possible sculptures, and the material they work with helps define that space: how easy it is to search, and what results are possible.
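
The separation shows up directly in a standard training loop. Below is a minimal sketch (assuming PyTorch; the toy data, layer sizes, and learning rate are all made-up illustration, not anything from this note): the model defines the expressive *structure*, the loss function defines the objective, and backpropagation plus gradient steps perform the *search* over that structure's parameters.

```python
import torch
import torch.nn as nn

# Hypothetical toy data: 2 features, 100 samples, XOR-like labels
X = torch.randn(100, 2)
y = (X[:, 0] * X[:, 1] > 0).float().unsqueeze(1)

# The "material"/structure: a small MLP expressive enough for a nonlinear boundary
model = nn.Sequential(
    nn.Linear(2, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
)

loss_fn = nn.BCEWithLogitsLoss()                       # the objective/loss function
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # how far the current "sculpture" is from the target
    loss.backward()               # backpropagation: gradients of the loss w.r.t. parameters
    optimizer.step()              # one step of the search through parameter space
```

Swapping the `nn.Sequential` stack for a different architecture changes the space being searched (the material), while the loss and optimizer determine how that space gets explored.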

---
Date: 20211203
Links to: [Machine-Learning](Machine-Learning.md) [Big Ideas MOC](Big%20Ideas%20MOC.md)
Tags:
References: