# Dimensionality reduction intuitions
### Overview
PCA vs. SVD vs. spectral embeddings vs. UMAP (how do they compare, from a linear algebra vantage point, etc.)
* A note on PCA: after the transformation that PCA achieves, the only thing that you can say about the resulting dimensions (eigenvectors) is that they are *uncorrelated* (due to orthogonality; they are truly independent only under a Gaussian assumption) and that they capture the directions of *highest variance*. It is worth really thinking about the following:
* PCA is a different way of *describing* the same underlying data. Put simply, our representation changes, *not* the actual data (we are not moving it through space or anything). The sketch after this list makes this concrete.
* So, PCA is akin to saying: "Let me find the most informationally dense way to represent these data points (high variance)."
* This means PCA is really about description, information. See the bullet on deep nets in [Things to Think About Deeply](Things%20to%20Think%20About%20Deeply.md).
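
A minimal sketch of the points above, in plain NumPy with made-up synthetic data: PCA via the eigendecomposition of the covariance matrix gives the same component variances as PCA via the SVD of the centered data, and the "transformation" is just a rotation into a new orthonormal basis — same data, new description, components uncorrelated, total variance unchanged.

```python
import numpy as np

# Hypothetical toy data: 200 points in 2D with correlated features.
rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[3.0, 1.5], [1.5, 1.0]], size=200)

# Center the data; PCA is defined on the centered matrix.
Xc = X - X.mean(axis=0)

# Route 1: eigendecomposition of the covariance matrix.
cov = (Xc.T @ Xc) / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)          # columns of eigvecs are orthonormal directions
order = np.argsort(eigvals)[::-1]               # sort by variance, largest first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Route 2: SVD of the centered data matrix gives the same spectrum.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
svd_vals = S**2 / (len(Xc) - 1)                 # singular values -> component variances

# The "transformation" is just a rotation into the new basis: same data, new description.
scores = Xc @ eigvecs

print(np.allclose(eigvals, svd_vals))                               # True: covariance route == SVD route
print(np.allclose(np.cov(scores.T), np.diag(eigvals)))              # components are uncorrelated
print(np.isclose(scores.var(axis=0, ddof=1).sum(),
                 Xc.var(axis=0, ddof=1).sum()))                     # total variance preserved by the rotation
```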
### Key points
* Preserve structure/topology: different methods preserve different things. PCA preserves global (linear) variance structure, while spectral embeddings and UMAP aim to preserve local neighborhood structure/topology. See the sketch below.
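
A small illustration of this point, using scikit-learn on synthetic concentric circles (the dataset and parameters here are invented for the example): a linear projection like PCA keeps the direction of maximal variance but mixes the two rings in 1D, while a spectral embedding, built from a nearest-neighbor graph, keeps local neighborhood structure and separates them. Depending on the neighborhood size the graph may be disconnected, in which case scikit-learn warns, but the separation is still visible.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA
from sklearn.manifold import SpectralEmbedding

# Hypothetical example: two concentric circles -- no 1D linear projection can separate them.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# PCA keeps the direction of maximal variance (global, linear structure).
pca_1d = PCA(n_components=1).fit_transform(X)

# A spectral embedding builds a neighborhood graph first, so it keeps *local* structure/topology.
spec_1d = SpectralEmbedding(n_components=1, n_neighbors=10, random_state=0).fit_transform(X)

def overlap(z, y):
    """Rough check: do the two circles overlap along the 1D coordinate?"""
    a, b = z[y == 0, 0], z[y == 1, 0]
    return max(a.min(), b.min()) < min(a.max(), b.max())

print("PCA 1D: circles overlap?     ", overlap(pca_1d, y))   # expected True  (local structure lost)
print("Spectral 1D: circles overlap?", overlap(spec_1d, y))  # expected False (neighborhoods kept)
```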
### Resources
* [Projection vs Embedding](Projection%20vs%20Embedding.md)
* [Word2Vec](Word2Vec.md)
* [Spectral-Embeddings](Spectral-Embeddings.md)
* [Dimensionality-Reduction](Dimensionality-Reduction.md)
* [UMAP](UMAP.md)
* [Graph Embeddings](Graph%20Embeddings.md)
* pca notes
* pca pdf tutorial from google guy
---
Date: 20210519
Links to: [Blog Post MOC](Blog%20Post%20MOC.md) [Geometry](Geometry.md)
References: