# Geometric Deep Learning

Machine learning is about trading off three sources of error:

1. **Statistical error**: arises from approximating expectations on a finite sample. It grows as you enlarge your hypothesis space.
2. **Approximation error**: how well can your model do within that hypothesis space? If your function space is too small, the best function you can find will incur a lot of approximation error.
3. **Optimization error**: the ability to actually find a (global) optimum within the space.

Even if we make assumptions about our function space, such as requiring it to be Lipschitz (locally smooth), it is still far too large, and we need some way to search through it efficiently. The statistical error is cursed by dimensionality; if we shrink the function space the search becomes easier, but then the approximation error is cursed by dimensionality instead. So we need to define *better function spaces to search through* - but *how*?

We need to move towards a new class of function spaces: *geometrically inspired function spaces*. Let's exploit the underlying low-dimensional structure of the high-dimensional input space. We can use **geometric priors** (only allow *[equivariant functions](Equivariance.md)*, those that respect a particular geometric principle). We should be able to do this without increasing approximation error, because we know for sure that the true function has a certain geometric property, and we bias that property into the model.

In geometric deep learning, the data lives on a **domain**. This domain is a set. It might carry additional structure, such as a *neighborhood* relation in a graph, or a *metric* giving the distance between two points in the set. Most of the time, the data isn't the domain itself; it is a *representation*, or a *signal*, defined on the domain and living in a *Hilbert space*.

Recall that a **[Symmetry](Symmetry.md)** of an object is simply a transformation of that object which leaves it unchanged. There are many symmetries in deep learning. For instance, if you swap two neurons in the same layer of a network (permuting the corresponding weights along with them), the computation graph stays isomorphic and the network computes the same function. There are also symmetries of the label function - an image of a dog is still a dog even if you apply a transformation to it. Note: this may not be the case for the number $6$ 😉. Also note: if we knew all of the symmetries of a certain class, we would only need one labeled example, because we could recognize every other example as a *semantically equivalent transformation* of it. Of course we can't do that - the learning problem is difficult precisely because we don't know all of the symmetries in advance.

We can talk about **invariants** or **symmetries** as the properties that remain unchanged under some class of transformations. This perspective provides clarity: different geometries can be defined by an appropriate choice of symmetry transformations, formalized in the language of group theory.

### Michael Bronstein - Geometric Deep Learning Talk

We can think of geometry as a **space** plus some class of **transformations** (formalized using group theory), and the study of properties that remain **invariant** (unchanged) under these transformations. Take an object and apply rigid motions to it: many things are preserved - area, parallel lines, etc.

![](geometry_transformations.png)

A great example of where we would like to have invariance is **translation invariance** in an image recognition task. Consider the two examples below - by just moving the image one pixel to the right, our input vector drastically changes! But we know it is effectively the same input!

![](Michael%20Bronstein%20-%20Geometric%20Deep%20Learning%20_%20MLSS%20Kraków%202023%2018-40%20screenshot.png)

![](Michael%20Bronstein%20-%20Geometric%20Deep%20Learning%20_%20MLSS%20Kraków%202023%2018-36%20screenshot.png)
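To make the pixel-shift example concrete, here is a minimal NumPy sketch (not from the talk - the toy 8×8 image, the stripe pattern, the random kernel, and the helper `conv_relu_sum` are all made up for illustration). It shows that the flattened input vectors of an image and its one-pixel shift are orthogonal, while a shift-equivariant operation (circular convolution) composed with a shift-invariant one (global sum pooling) produces the same output for both.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 8x8 "image": a vertical stripe, plus a copy shifted one pixel to the right.
img = np.zeros((8, 8))
img[2:6, 3] = 1.0
shifted = np.roll(img, shift=1, axis=1)

# As flattened vectors, the two inputs look completely different ...
v1, v2 = img.ravel(), shifted.ravel()
cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
print(f"cosine similarity of flattened inputs: {cos:.2f}")  # 0.00

def conv_relu_sum(x, k):
    """Circular 2D cross-correlation, ReLU, then a global sum.

    The convolution is translation-*equivariant* and the global sum is
    translation-*invariant*, so the composition is invariant to shifts.
    """
    out = np.zeros_like(x)
    for di in range(k.shape[0]):
        for dj in range(k.shape[1]):
            out += k[di, dj] * np.roll(x, shift=(-di, -dj), axis=(0, 1))
    return np.maximum(out, 0.0).sum()

# ... but the invariant feature is the same for the original and shifted image.
kernel = rng.normal(size=(3, 3))
a, b = conv_relu_sum(img, kernel), conv_relu_sum(shifted, kernel)
print(f"invariant features: {a:.4f} vs {b:.4f}")  # identical up to float rounding
```

This composition - equivariant feature maps followed by an invariant aggregation - is exactly the kind of geometric prior a convolutional network builds in.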
---
Date: 20230120
Links to:
Tags:
References:
* https://youtu.be/w6Pw4MOzMuo
* [Michael Bronstein - Geometric Deep Learning | MLSS Kraków 2023 - YouTube](https://www.youtube.com/watch?v=hROSXAY2JBc&t=8530s)