# Symmetry
Roughly speaking, we can define a symmetry as:
> A **symmetry** is a *transformation* that you can apply to a thing while still leaving it the same thing.
Put slightly differently (see page 24 of *The Biggest Ideas in the Universe: Space, Time, and Motion* by Sean Carroll):
> A symmetry is a transformation you can do to a system that leaves its *essential features* unchanged.
For example, consider the symmetry of [Translation](Translation.md). If I translate a triangle 3 units to the right, we generally still think of it as the same triangle. It is important to note that we *don't have to*; rather, we *choose* to! You could argue that it is a different triangle, that it has a different center point, and so on. But, as Jordan Ellenberg states in [Shape](Shape.md), a lot of math is figuring out what we can get away with not caring about (and, equivalently, what we should care about). In our physical reality, if I move a coffee mug one foot across my desk, I still view it as the same mug. This also highlights that we consider symmetry with respect to *time*: my mug right now is the same mug that sat on my desk yesterday. I don't refer to one as "my mug today" and the other as "my mug from yesterday".
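A minimal sketch of this in code (my own illustration, assuming NumPy): translating a triangle changes its vertex coordinates, but the features we choose to care about, such as its side lengths, are left untouched.

```python
import numpy as np

# Toy sketch: translation changes coordinates but preserves side lengths.
triangle = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])  # vertices (x, y)

def side_lengths(vertices):
    # Distance from each vertex to the next, wrapping around the polygon.
    return np.linalg.norm(vertices - np.roll(vertices, -1, axis=0), axis=1)

translated = triangle + np.array([3.0, 0.0])  # slide 3 units to the right

print(side_lengths(triangle))    # [4.         4.24264069 3.16227766]
print(side_lengths(translated))  # identical: translation is a symmetry here
```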
### In relation to group theory
A theme of math over the past two centuries has been that the nature of symmetry, in and of itself, can show us all sorts of non-obvious facts about the other objects that we study.
For instance, we can think about how this relates to physics via Noether's Theorem, which states that every conservation law corresponds to a certain kind of symmetry, a certain group.

More specifically, the symmetries in question are the transformations we should be able to apply to a physical setup such that the laws of physics don't change.
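To make this slightly more concrete, here is a hedged sketch of the standard one-dimensional statement (my notation, not Carroll's exact formulation): if the Lagrangian $L(q, \dot{q})$ is unchanged by an infinitesimal transformation $q \to q + \epsilon K(q)$, then there is a conserved quantity

$$
Q = \frac{\partial L}{\partial \dot{q}}\, K(q), \qquad \frac{dQ}{dt} = 0 .
$$

For example, invariance under spatial translation ($K(q) = 1$) yields conservation of momentum, and invariance under time translation yields conservation of energy.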
### In relation to Machine Learning
Consider the following quote:
> Deep Learning focuses on creating generalizations through the capture of an invariant representation. This is why, data augmentation is a best-practice approach. So when working with images, images are rotated, cropped, de-saturated etc.. This trains the network to ignore these variations. In addition, Convolution Networks are designed to ignore image translations (i.e. difference in locations). The reason DL systems require many training sets is that it needs to “see” enough variations so that it can learn what to ignore and what to continue to keep relevant. Perhaps however that the requirement for invariances is too high and we should seek something less demanding in the form of equivariances. See more [here](https://medium.com/intuitionmachine/exploration-exploitation-and-imperfect-representation-in-deep-learning-9472b67fdecd).
We can think of the above as *teaching* the neural net to *learn* the **symmetries** (rotation, cropping, de-saturation, etc.) that leave the object **[invariant](Invariant.md)**.
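As a toy sketch of the translation-invariance point (my own example, assuming NumPy and SciPy are available): a convolutional filter followed by *global* max pooling gives the same answer no matter where the object sits in the image, at least for circular shifts.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
image = np.zeros((16, 16))
image[4:7, 5:8] = rng.random((3, 3))  # a small "object" placed in the image
kernel = rng.random((3, 3))           # one convolutional filter

def conv_then_global_max(img, k):
    # Cross-correlation (what CNN "convolutions" actually compute),
    # with wraparound borders, followed by global max pooling.
    return correlate2d(img, k, mode="same", boundary="wrap").max()

shifted = np.roll(image, shift=(6, -4), axis=(0, 1))  # translate the object

print(conv_then_global_max(image, kernel))    # same value for both:
print(conv_then_global_max(shifted, kernel))  # the object's location is ignored
```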
---
Date: 20210806
Links to: [Mathematics MOC](Mathematics%20MOC.md) [003-Data-Science-MOC](003-Data-Science-MOC.md) [Group-Theory](Group-Theory.md) [Geometry](Geometry.md)
Tags:
References:
* [Wikipedia, translational symmetry](https://en.wikipedia.org/wiki/Translational_symmetry)
* [Group Theory, abstraction - 3b1b](https://www.youtube.com/watch?v=mH0oCDa74tE)
* [Group theory YouTube video, good intro](https://www.youtube.com/watch?v=EsBn7G2yhB8&list=PLDcSwjT2BF_VuNbn8HiHZKKy59SgnIAeO&index=1)