# Abstract Vector Spaces

### Technical Definition (more intuitive)

A set of vectors (points) in $n$ dimensions forms a vector space if and only if the operations of *addition* and *scalar multiplication* are defined on the set.

### Technical Definition

A **vector space** (also called a **linear space**) is a [set](https://en.wikipedia.org/wiki/Set_(mathematics)) of objects called [_vectors_](https://en.wikipedia.org/wiki/Vector_(mathematics_and_physics) "Vector (mathematics and physics)"), which may be [added](https://en.wikipedia.org/wiki/Vector_addition "Vector addition") together and [multiplied](https://en.wikipedia.org/wiki/Scalar_multiplication "Scalar multiplication") ("scaled") by numbers, called [scalars](https://en.wikipedia.org/wiki/Scalar_(mathematics) "Scalar (mathematics)"). Scalars are often taken to be [real numbers](https://en.wikipedia.org/wiki/Real_number), but there are also vector spaces with scalar multiplication by [complex numbers](https://en.wikipedia.org/wiki/Complex_number "Complex number"), [rational numbers](https://en.wikipedia.org/wiki/Rational_number "Rational number"), or generally any [field](https://en.wikipedia.org/wiki/Field_(mathematics) "Field (mathematics)").

### Intuitive approach

Note that the [determinant and eigenvectors](https://youtu.be/TgKwz5Ikpc8?list=PL0-GT3co4r2y2YErbmuJw2L5tW4Ew2O5B&t=89) seem indifferent to our choice of coordinate system.

* The determinant tells us how much a linear transformation scales areas
* And eigenvectors stay on their own span during a transformation

Both of these properties are inherently *spatial*, and we can freely change our coordinate system without changing the underlying value of either one.

Note that a massive benefit of abstractness is that we get something general enough to apply to all sorts of spaces (see [here](https://youtu.be/TgKwz5Ikpc8?list=PL0-GT3co4r2y2YErbmuJw2L5tW4Ew2O5B&t=269)).

We can think of linear transformations as preserving addition and scalar multiplication.
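This preservation property can be checked concretely. A minimal sketch in Python (the particular matrix and the helper names `T`, `add`, `scale` are my own, not from the source), verifying additivity and homogeneity for one linear map on pairs of numbers:

```python
# A linear transformation preserves addition and scalar multiplication.
# Here T applies the (arbitrarily chosen) matrix [[2, 1], [0, 3]] to a pair.

def T(v):
    x, y = v
    return (2 * x + 1 * y, 0 * x + 3 * y)

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(c, v):
    return (c * v[0], c * v[1])

u, v, c = (1.0, 2.0), (-3.0, 0.5), 4.0

# Additivity: T(u + v) == T(u) + T(v)
assert T(add(u, v)) == add(T(u), T(v))
# Homogeneity: T(c * v) == c * T(v)
assert T(scale(c, v)) == scale(c, T(v))
```

Any map satisfying these two checks for *all* inputs is linear, regardless of what the "vectors" actually are.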
Note that as long as there is a notion of *scaling* and *adding*, then our object can be considered (at least abstractly) a vector! Sets of such objects are called *vector spaces*.

* Arrows in space
* Lists of numbers
* Functions

We are then allowed to use all of the *tools* we developed for working with traditional vectors in linear algebra (linear transformations, the null space, eigenvectors, dot products, etc.) to work with these abstract vectors.

To come up with vector spaces, we don't want to have to think about *all* of the different vector spaces that could possibly be discovered. So, we come up with a list of rules (*axioms*) that vector addition and scaling have to abide by:

![](Screen%20Shot%202021-01-28%20at%208.21.11%20AM.png)

There are 8 axioms in modern linear algebra that any vector space must satisfy if all of the theory and constructs that we have discovered are going to apply. These axioms are not so much a fundamental rule of nature as they are an *interface* between you, the mathematician discovering results, and other people who want to apply those results to new sorts of vector spaces.

It is for this reason that mathematicians tend to phrase all of their results very *abstractly*, rather than in terms of a specific type of vector space, such as an arrow or a function. For example, this is why every textbook tends to define linearity in terms of [additivity and scaling](Linearity.md), rather than talking about grid lines remaining parallel and evenly spaced.

So, the mathematician's answer to "what are vectors?" is to just ignore the question. In the modern theory, the form vectors take doesn't really matter. They simply need to have *properties* that allow them to satisfy the necessary axioms.

### What is the number 3?

This is similar to the question: what is the number 3? It concretely comes up in the context of triplets of things.
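The "functions are vectors" embodiment can be made concrete: adding two functions and scaling a function are both defined pointwise, which is exactly the structure the axioms ask for. A minimal sketch in Python (the helper names `add` and `scale` are my own):

```python
# Functions form a vector space: "addition" and "scaling" are pointwise.

def add(f, g):
    return lambda x: f(x) + g(x)

def scale(c, f):
    return lambda x: c * f(x)

f = lambda x: x ** 2
g = lambda x: 3 * x

h = add(f, g)      # (f + g)(x) = x^2 + 3x
k = scale(2, f)    # (2f)(x) = 2x^2

assert h(4) == 16 + 12   # 28
assert k(3) == 2 * 9     # 18
# Commutativity of addition, one of the axioms, checked at a point:
assert add(f, g)(5) == add(g, f)(5)
```

Once these operations are in place, constructs like linear maps between function spaces (e.g. the derivative) make sense with no extra machinery.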
But in math, it is treated as an *abstraction* for all possible triplets of things, and lets you reason about all possible triplets using a *single idea*.

The same thing applies to vectors! They have many embodiments (arrows, lists of numbers, functions), but math abstracts these all into a single intangible notion of a vector space.

### Abstract Vector Space - Linear operators

[Linear operators](Operators.md) are an example of an abstract vector space.

### The vector space of matrices

It is worth noting that the [set of all matrices of a fixed size forms a vector space](http://linear.ups.edu/html/section-VS.html#:~:text=So%2C%20the%20set%20of%20all,element%20of%20a%20vector%20space.)! This is because matrices satisfy the 8 axioms above that a vector space requires.

>One of the key concepts of modern mathematics is that one of the best and most fruitful ways to study things is by considering structure and functions that preserve that structure. Vector spaces, for example, are sets with _structure_: you have addition with a bunch of properties, scalar multiplication with a bunch of properties. You could just take a vector space and stare at it until you figured out interesting things about it. But a better thing to do is to consider all the different ways that you can take that vector space, and either map it _to_ other vector spaces, or map from other vector spaces _to_ it. But not with arbitrary maps. Rather, you want to consider maps that take into account that you are working with vector spaces.

https://math.stackexchange.com/questions/3654094/non-linear-maps-between-two-vector-spaces

---

References:

* [Abstract vector spaces - 3b1b](https://www.youtube.com/watch?v=TgKwz5Ikpc8&list=PL0-GT3co4r2y2YErbmuJw2L5tW4Ew2O5B&index=15&t=521s)
* [Wikipedia](https://en.wikipedia.org/wiki/Vector_space)
* https://math.stackexchange.com/questions/3654094/non-linear-maps-between-two-vector-spaces
* [Test vector space drawign](Test%20vector%20space%20drawign.md)
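The claim above that fixed-size matrices form a vector space can also be checked directly: entrywise addition and scalar multiplication satisfy the axioms. A minimal sketch in Python (2×2 matrices as nested tuples; the helper names are my own):

```python
# Fixed-size matrices form a vector space under entrywise operations.

def madd(A, B):
    return tuple(tuple(a + b for a, b in zip(ra, rb)) for ra, rb in zip(A, B))

def mscale(c, A):
    return tuple(tuple(c * a for a in row) for row in A)

Z = ((0, 0), (0, 0))                # the zero "vector"
A = ((1, 2), (3, 4))
B = ((5, 6), (7, 8))

assert madd(A, B) == madd(B, A)     # commutativity of addition
assert madd(A, Z) == A              # additive identity
assert madd(A, mscale(-1, A)) == Z  # additive inverses
assert mscale(2, madd(A, B)) == madd(mscale(2, A), mscale(2, B))  # distributivity
```

Nothing here uses matrix *multiplication*; as vectors, matrices are just grids of numbers that add and scale entrywise.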