# Fourier Transform

> To make one more analogy to linear algebra, the Fourier Transform of a function is just the list of components of the function in the "frequency basis", while the normal coordinate representation of the function is its expression in the coordinate basis.

> It will be useful to remember that the Fourier Transform is really just the formula for a projection, and the inverse is just a way of writing _f_ in our new basis.

### Summary

- Functions can be decomposed into frequencies, much like vectors can along bases
- The contribution of each frequency to a function is the projection of that function onto the frequency
- Projecting onto a frequency is accomplished by spinning the function at that frequency and looking at the net effect
- The set of contribution amounts for each frequency in a function is called its Fourier Transform
- Reconstituting a function from its frequency components is called the Inverse Fourier Transform
- We can look at the coordinate representation and the frequency representation as two ways to get information about the same function, and we can look at their values as being the components in two different "bases"
- We can alternatively view the Fourier Transform as an abstract relationship between two functions, and transform between them to aid in solving problems

### A note on wrapping our function around the complex unit circle

As mentioned [here](https://sites.google.com/site/butwhymath/fourier-analysis/the-fourier-transform), we can think of wrapping our function around a complex unit circle at varying frequencies. This can be slightly hard to visualize, but 3b1b clears that up [here](https://youtu.be/spUNpyF58BY?t=191). The key point to keep in mind here is that this idea really isn't that complex at all!
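The "wrapping" picture above can be sketched numerically. This is a minimal, hypothetical illustration (names like `winding_strength` are my own): we spin a signal around the complex unit circle at various winding frequencies and look at the net effect (the mean of the wrapped points), which only becomes large when the winding frequency matches a frequency actually present in the signal.

```python
import numpy as np

# Sample one second of a pure 3 Hz tone.
t = np.linspace(0, 1, 2000, endpoint=False)
signal = np.cos(2 * np.pi * 3 * t)

def winding_strength(freq):
    # Multiplying by e^{-2*pi*i*freq*t} "wraps" the signal around the unit
    # circle at `freq` revolutions per unit time; the mean is the net effect
    # (the center of mass of the wrapped curve).
    return abs(np.mean(signal * np.exp(-2j * np.pi * freq * t)))

strengths = {f: winding_strength(f) for f in [1, 2, 3, 4, 5]}
best = max(strengths, key=strengths.get)  # the winding frequency that "sticks"
```

Here `best` picks out 3, the tone's true frequency, while the other winding frequencies cancel out to roughly zero — which is exactly the "spin and look at the net effect" idea from the summary.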
### Benefits

* [Derivatives become multiplications in Fourier space/domain](https://youtu.be/d5d0ORQHNYs?list=PLMrJAkhIeNNT_Xh3Oy0Y4LTj0Oxo8GqsC&t=405)
* [Convolution becomes a product in Fourier space](https://youtu.be/mOiY1fOROOg?list=PLMrJAkhIeNNT_Xh3Oy0Y4LTj0Oxo8GqsC&t=103)

### High Level Abstractions to always remember

* The Fourier Transform is, well, a *transform*. We are specifically transforming a function $f(x)$ from the *space* domain to the *frequency* domain. In other words, we objectively have a function $f$ that exists in the world, and we are finding different ways to *describe it*.
* This is similar to the sense in which we view vectors as existing objectively in space, and [we can *choose* a particular coordinate system](https://sites.google.com/site/butwhymath/fourier-analysis/the-fourier-transform) - i.e. a particular *set of basis vectors* - to represent them.
* Well, we can do the same thing with functions! We tend to represent $f(x)$ in coordinate space, but, due to the power of [Abstract Vector Spaces](Abstract%20Vector%20Spaces.md), because the space of functions forms a vector space, we can actually represent $f(x)$ in *another basis*, so long as it is orthogonal.
* We prove (see paper notes on Fourier) that $\sin(kx)$ and $\cos(kx)$ form an infinite-dimensional orthogonal basis, which means we can represent our function $f(x)$ as an infinite sum of sines and cosines!
* To do this, we simply need to see *how much of $f(x)$* is in the $\sin(kx)$ "direction". This is the exact same idea as when we want to represent a vector in a particular basis and we need to determine the magnitude of that vector in each basis direction.
* This is accomplished by *projecting* our vector onto the particular basis vector. In the case of our function, we *project* our function onto the particular basis function (e.g. $\sin(kx)$) via an inner product.
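The projection idea in the bullets above can be made concrete. A minimal sketch, assuming the $L^2$ inner product on $[-\pi, \pi]$ approximated by a discrete sum (the function `sine_coefficient` and the test function $f$ are my own choices for illustration): projecting $f$ onto $\sin(kx)$ via $\langle f, \sin(kx)\rangle / \langle \sin(kx), \sin(kx)\rangle$ should recover exactly the amplitudes we built $f$ from.

```python
import numpy as np

# Build f(x) = 2*sin(3x) + 0.5*sin(7x) on a fine grid over [-pi, pi].
x = np.linspace(-np.pi, np.pi, 100_000)
f = 2 * np.sin(3 * x) + 0.5 * np.sin(7 * x)

def sine_coefficient(k):
    # "How much of f is in the sin(kx) direction": the projection formula
    # <f, sin(kx)> / <sin(kx), sin(kx)>, with the inner product approximated
    # by a discrete sum over the grid.
    basis = np.sin(k * x)
    return np.sum(f * basis) / np.sum(basis * basis)
```

With this, `sine_coefficient(3)` comes out near 2 and `sine_coefficient(7)` near 0.5, while frequencies not present in $f$ (say $k=5$) project to roughly zero — the same computation a vector projection onto an orthogonal basis performs.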
* The reason the Fourier Transform is so powerful is that it may allow us to work with certain functions and processes more cleanly/easily, *just as was the case* with [change of variables](Integrals.md#Change%20of%20Variables) and [polar coordinates for certain integrals](Integration-Difficulty.md#Answer%201).
* One of the biggest things to keep in mind here is that we truly are able to represent $f$ in terms of two different spaces. We can represent $f$ in our traditional $x$ space as well as in *frequency* space. We are talking about the *same* $f$, but we are simply able to describe it in different ways. Depending on our situation, it may be *easier* to talk about $f$ in one way compared to the other!
* My notes on the Fourier Transform (Fourier Transform - 03232021) go into great depth on all of this.

### Interesting Note from Justin Waugh to consider

> Also, as an aside and possibly related to this in a way: in a "language" that only specifies linear functions (e.g. you are restricted via the system to only write "linear operators"), then _some_ algebra solutions, statements, etc. are necessarily "infinitely expensive" (e.g. you'd have to unroll an infinite recursion type of thing). But, by "extending" your language - creating new operators, new definitions, new "language rules" - you can collapse an infinite operation into a "shorter statement" in that language, and now "talk" about a more complex concept/operation "with low language cost". I somewhat think of this like Laplace transforms (or more generally: category stuff). E.g. solving a differential equation with purely linear operators could be really hard, but if you can Laplace transform, you can use algebra, then transform back, and get a solution "fast" that doesn't "work with infinities" the way staying in $e^x$ space does. In a way, it's like transforming from a "non-linear language problem" to a language where the problem becomes linear, so it's "cheap" to solve and operate on, then transforming back.
### Extra Images

![](Screen%20Shot%202021-03-05%20at%208.36.05%20AM.png)

### Discrete Fourier Transform

References:
* [Steve Brunton DFT](https://www.youtube.com/watch?v=nl9TZanwbBk&list=PLMrJAkhIeNNT_Xh3Oy0Y4LTj0Oxo8GqsC&index=15)

### Fast Fourier Transform

Key points:

> The FFT is based on the fundamental observation that our Discrete Fourier Transform matrix has so much symmetry that if we are clever in our reshuffling (reorganizing indices), we can massively simplify our calculation, taking advantage of redundancies. We can perform this recursively, [dividing by two each time](https://youtu.be/toj_IoCQE-4?list=PLMrJAkhIeNNT_Xh3Oy0Y4LTj0Oxo8GqsC&t=498), which yields a final computational complexity of $O(n \log n)$. Again, we are **exploiting symmetries** (specifically when the number of entries in our matrix is a power of 2).

Note that in the video [The Fast Fourier Transform (FFT): Most Ingenious Algorithm Ever?](https://www.youtube.com/watch?v=h7apO7q16V0&t=891s), the section dealing with polynomials is implicitly making use of the fact that the polynomials represent an infinite-dimensional basis, making up our "function space".

For a meta conversation on the FFT, check out [Algorithms and Representation](Algorithms%20and%20Representation.md).

References:
* [Steve Brunton FFT Algorithm 1](https://www.youtube.com/watch?v=E8HeD-MUrjY&list=PLMrJAkhIeNNT_Xh3Oy0Y4LTj0Oxo8GqsC&index=17)
* [Steve Brunton FFT Algorithm 2](https://www.youtube.com/watch?v=toj_IoCQE-4&list=PLMrJAkhIeNNT_Xh3Oy0Y4LTj0Oxo8GqsC&index=18)
* [The Fast Fourier Transform (FFT): Most Ingenious Algorithm Ever?](https://www.youtube.com/watch?v=h7apO7q16V0&t=891s)

### Notes

* The Fourier Transform is a [Unitary-Operator](Unitary-Operator.md)

---

References:
* [My notes on Fourier Transforms (Fourier Transform - 03232021)](https://notability.com/n/RPreQJC86BAk_3edZFLUH)
* [Fourier Transform Equation Explained](https://youtu.be/8V6Hi-kP9EE?t=224)
* [The Fourier Transform - But Why? - Google Sites](https://sites.google.com/site/butwhymath/fourier-analysis/the-fourier-transform)
* [Properties of the Fourier Transform - But Why? - Google Sites](https://sites.google.com/site/butwhymath/fourier-analysis/properties-of-the-fourier-transform)
* [Phase of complex number](https://math.stackexchange.com/questions/3141048/what-exactly-is-the-phase-of-a-complex-number/3141166)
* [Steve Brunton Playlist - Great Resource](https://www.youtube.com/playlist?list=PLMrJAkhIeNNT_Xh3Oy0Y4LTj0Oxo8GqsC)
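As a quick sanity check of the DFT/FFT discussion above, here is a minimal sketch (my own construction, not from the referenced videos) contrasting the $O(n^2)$ DFT written as a matrix-vector product with numpy's $O(n \log n)$ FFT; with $n$ a power of two - the case where the symmetry trick applies most cleanly - the two agree to numerical precision.

```python
import numpy as np

n = 8  # a power of 2, the natural case for the FFT's divide-by-two recursion
x = np.arange(n, dtype=float)

# Build the DFT matrix F[j, k] = omega^(j*k), with omega = e^{-2*pi*i/n}.
omega = np.exp(-2j * np.pi / n)
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = omega ** (j * k)

naive = F @ x          # O(n^2): explicit matrix-vector product
fast = np.fft.fft(x)   # O(n log n): FFT exploiting the matrix's symmetry
```

Both `naive` and `fast` hold the same Fourier coefficients; the FFT simply reorganizes the redundant multiplications in `F` recursively rather than performing them all.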