# Tensors, Physics Approach

### Key ideas

* Geometric vectors are tensors. They have one physical axis, so they are said to be tensors of *rank 1*.
* A scalar is said to be a rank 0 tensor, where rank is different from dimension.

### Change of basis rules (forward and backward transformations)

**Forward Transformation**

We can transform from the old basis to the new basis via:

![](Screen%20Shot%202021-05-04%20at%209.30.00%20AM.png)

Note that there is a slight mistake above; the video author mistakenly used $F^T$ instead of $F$.

**Backward Transformation**

We can transform in the other direction via:

![](Screen%20Shot%202021-05-04%20at%209.31.46%20AM.png)

**Multiply F and B**

We see that multiplying them together simply yields the identity matrix:

![](Screen%20Shot%202021-05-04%20at%209.32.58%20AM.png)

In other words, our transformations are inverses of each other.

**Generalize to n dimensions**

We start with $n$ old basis vectors, $\vec{e}_1, \dots, \vec{e}_n$, and $n$ new basis vectors, $\vec{\tilde{e}}_1, \dots, \vec{\tilde{e}}_n$:

![](Screen%20Shot%202021-05-04%20at%209.37.18%20AM.png)

To avoid writing out all of these equations, we can state the general formula:

![](Screen%20Shot%202021-05-04%20at%209.37.39%20AM.png)

![](Screen%20Shot%202021-05-04%20at%209.37.59%20AM.png)

So the final forward and backward transforms are:

![](Screen%20Shot%202021-05-04%20at%209.38.23%20AM.png)

### Vectors

Vectors are our first example of a tensor.

![](Screen%20Shot%202021-05-04%20at%209.39.57%20AM.png)

Above, the list of numbers is actually the vector *components*.

![](Screen%20Shot%202021-05-04%20at%209.41.04%20AM.png)

![](Screen%20Shot%202021-05-04%20at%209.41.25%20AM.png)

This third definition is very abstract; we are really just left with a bunch of rules. How are the two sets of components below related to each other?
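The forward/backward relationship is easy to check numerically. Below is a minimal sketch, assuming a 2D example where the matrix $F$ is an arbitrary choice of mine (not from the notes); the only claims being illustrated are that the new basis is built from the old one via $F$, and that $B = F^{-1}$ undoes it:

```python
import numpy as np

# Columns of E are the old basis vectors e_1, e_2 (standard basis here).
E = np.eye(2)

# Forward transform F (arbitrary example): each new basis vector is a
# linear combination of the old ones, so the new basis is E @ F.
F = np.array([[2.0, 1.0],
              [0.0, 1.0]])
E_new = E @ F

# Backward transform B recovers the old basis from the new one.
B = np.linalg.inv(F)

# F and B are inverses: multiplying them yields the identity matrix.
assert np.allclose(F @ B, np.eye(2))

# Applying B to the new basis returns the old basis.
assert np.allclose(E_new @ B, E)
```

Any invertible $F$ works here; the identity check is exactly the "Multiply F and B" picture above.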
![](Screen%20Shot%202021-05-04%20at%209.43.08%20AM.png)

Recall the forward and backward transformations:

![](Screen%20Shot%202021-05-04%20at%209.43.52%20AM.png)

Maybe the forward transformation will take our vector from the old CS to the new CS?

![](Screen%20Shot%202021-05-04%20at%209.44.28%20AM.png)

Nope! That doesn't work! What if we try the backward transformation?

![](Screen%20Shot%202021-05-04%20at%209.44.48%20AM.png)

![](Screen%20Shot%202021-05-04%20at%209.45.04%20AM.png)

That does work! Why exactly does it work? We need to recall [Change of Basis Physics](Change%20of%20Basis%20Physics.md). Our rules can be described as follows. To move from old components to new components:

![](Screen%20Shot%202021-05-04%20at%209.49.03%20AM.png)

And to move from new components to old components:

![](Screen%20Shot%202021-05-04%20at%209.49.36%20AM.png)

**Summary: How basis vectors transform (left) and how vectors transform (right)**

![](Screen%20Shot%202021-05-04%20at%209.51.22%20AM.png)

Because vector components behave *contrary* to the basis vectors, we say that vector components are **contravariant**. To remind ourselves that the components are contravariant, we place their indices as superscripts (while the basis indices remain subscripts):

![](Screen%20Shot%202021-05-04%20at%209.52.07%20AM.png)

### Covectors

A covector (row vector) is simply a linear function that maps a vector to a scalar (see [here](https://www.youtube.com/watch?v=LNoQ_Q5JQMY&list=PLJHszsWbB6hrkmmq57lX8BV-o-YIOFsiG&index=7)).

![](Screen%20Shot%202021-05-04%20at%208.52.33%20AM.png)

We can visualize our covector as:

![](Screen%20Shot%202021-05-04%20at%208.58.53%20AM.png)

This is very similar to a topographic map.
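The contravariant rule above can be sketched numerically: if the basis transforms with $F$, the components must transform with $B = F^{-1}$ so that the geometric vector itself stays the same. This is a minimal sketch, assuming the same arbitrary example matrix $F$ as before (not from the notes):

```python
import numpy as np

# Old basis vectors are the columns of E; the new basis is E @ F.
E = np.eye(2)
F = np.array([[2.0, 1.0],
              [0.0, 1.0]])   # arbitrary example forward transform
E_new = E @ F
B = np.linalg.inv(F)

# Components of some vector in the old basis.
v_old = np.array([3.0, 4.0])

# Contravariant rule: components transform with the *backward* matrix B,
# opposite to the basis vectors (which transform with F).
v_new = B @ v_old

# The geometric vector is unchanged: assembling it from either basis
# with its matching components gives the same arrow.
assert np.allclose(E @ v_old, E_new @ v_new)
```

This is why the forward transformation "doesn't work" on components: only the inverse keeps the reassembled vector invariant.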
We have a location on the surface of the earth (a 2D coordinate pair, latitude and longitude) and we use contours to represent the height along a specific curve:

![](Screen%20Shot%202021-05-04%20at%209.00.03%20AM.png)

One useful way to visualize a covector acting on a vector is shown below:

![](Screen%20Shot%202021-05-04%20at%209.03.34%20AM.png)

We see that we only need to count the number of lines that $\vec{v}$ pierces. Why is this? Remember, each of these lines represents a *constant* result (the scalar output of the covector) for *any* input vector that falls on that line. Worth keeping in mind is that this contour plot is meant to represent a surface above the x-y plane: any (x, y) tuple maps to a particular scalar on that surface. The benefit of linearity is that our contours are straight lines.

![How can I find the angle of the surface/3D Plane. - Mathematics Stack Exchange](https://i.stack.imgur.com/BttwR.png)

Now, if we wanted to increase the size of a covector, we would make our stack of contours denser:

![](Screen%20Shot%202021-05-04%20at%209.10.06%20AM.png)

And to decrease the size of our covector:

![](Screen%20Shot%202021-05-04%20at%209.10.46%20AM.png)

We can think about adding covectors as follows:

![](Screen%20Shot%202021-05-04%20at%209.12.55%20AM.png)

Okay, so we have shown that covectors have sensible scaling and adding rules. That means we have a [vector space](Abstract%20Vector%20Spaces.md). Now, if we have some ordinary vector space, $V$, with its scaling and adding rules, then the set of all covectors which act on $V$ forms a new vector space called the **dual vector space**, which we call $V^{*}$.
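In components, a covector is just a row vector acting via the dot product, and the scaling/adding rules fall out of linearity. A minimal sketch, with arbitrary example components chosen by me:

```python
import numpy as np

# Covector components (a row vector); arbitrary example values.
alpha = np.array([2.0, 1.0])

def covector(v):
    # alpha acting on v: geometrically, the number of contour
    # lines of alpha's "stack" that the arrow v pierces.
    return alpha @ v

u = np.array([1.0, 0.0])
v = np.array([0.0, 3.0])

# Linearity: alpha(a*u + b*v) = a*alpha(u) + b*alpha(v)
a, b = 2.0, -1.0
assert np.isclose(covector(a * u + b * v), a * covector(u) + b * covector(v))

# Scaling the covector (a denser contour stack) scales every output.
assert np.isclose((3 * alpha) @ v, 3 * covector(v))

# Adding covectors adds their outputs on every input.
beta = np.array([-1.0, 4.0])
assert np.isclose((alpha + beta) @ v, covector(v) + beta @ v)
```

These three checks are exactly the vector-space axioms the contour pictures are illustrating.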
This has a different set of adding and scaling rules:

![](Screen%20Shot%202021-05-04%20at%209.17.34%20AM.png)

In summary:

![](Screen%20Shot%202021-05-04%20at%209.18.10%20AM.png)

We can also summarize and state:

* **Covectors** are **invariant** (they are purely geometric objects and do not depend on a coordinate system).
* **Covector components** are **not invariant** (a covector will be represented by different row vectors, with different components, depending on which coordinate system we are using).

We know that a column vector represents a vector's components in a given basis. Now we may want to ask: what exactly do row vectors represent? Do they represent covector components? Yes!

![](Screen%20Shot%202021-05-04%20at%209.22.31%20AM.png)

![](Screen%20Shot%202021-05-04%20at%209.24.39%20AM.png)

So we can see that our $\epsilon$ covectors are *projecting out* our vector components:

![](Screen%20Shot%202021-05-04%20at%209.26.42%20AM.png)

The $\epsilon$ covectors can be visualized as:

![](Screen%20Shot%202021-05-05%20at%207.33.22%20AM.png)

We can write a general covector, $\alpha$:

![](Screen%20Shot%202021-05-05%20at%207.35.10%20AM.png)

Above, we have written a general covector, $\alpha$ (which could be any covector of our choice), as a linear combination of the $\epsilon$ covectors. The $\epsilon$ covectors form a **basis** for the set of all covectors. For that reason we call these $\epsilon$ covectors the **dual basis**. They are a basis for the **dual space** $V^*$.

### Contravariant vs Covariant

* We know that geometric vectors, as tensors, are invariant to basis transformations (they maintain their magnitude and direction regardless of how we describe them), and that we have to multiply the vector components by the inverse of the basis transformation in order to maintain that invariance.
* A contravariant tensor is an object whose components transform with the inverse of the basis transformation matrix when the basis changes.
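The contravariant/covariant split can be checked numerically: basis vectors and covector components transform with $F$ (covariant), while vector components transform with $B = F^{-1}$ (contravariant), and the scalar $\alpha(\vec{v})$ comes out basis-independent. A minimal sketch, where $F$ and the component values are arbitrary examples of mine:

```python
import numpy as np

# Under a change of basis with forward matrix F:
#   - basis vectors transform with F            (covariant)
#   - vector components transform with B = F^-1 (contravariant)
#   - covector components transform with F      (covariant)
F = np.array([[2.0, 1.0],
              [0.0, 1.0]])    # arbitrary example forward transform
B = np.linalg.inv(F)

v_old = np.array([3.0, 4.0])  # vector components, old basis
a_old = np.array([2.0, 5.0])  # covector components, old basis

v_new = B @ v_old             # contravariant: uses the inverse
a_new = a_old @ F             # covariant: uses F itself (row vector)

# The scalar a(v) is invariant -- the same number in either basis,
# because the F and B cancel: (aF)(Bv) = a(FB)v = av.
assert np.isclose(a_old @ v_old, a_new @ v_new)
```

This cancellation is the whole point of the opposite transformation rules: outputs of covectors acting on vectors are coordinate-free scalars.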
An example is shown below:

![](Screen%20Shot%202021-02-28%20at%2010.42.33%20AM.png)

* We see that $\overrightarrow{AB}$ maintains the same magnitude and direction before and after the transformation, but the description changes (based on the inverse of the transformation matrix). This is necessary to ensure that we still describe the same vector.
* A quick intuition: if the basis vectors are doubled, the components are halved; this is a contravariant tensor. Examples: displacement, velocity, acceleration.
* Covariant vectors do not represent a geometric vector (or else they would be contravariant). Instead, they represent a linear function that takes a vector as input (in a specific basis) and maps it to a scalar. For instance: $f(\mathbf{x}) = v_0 x_0 + v_1 x_1 + v_2 x_2$
* A covariant vector is an object that has an input (a vector) and produces an output (a scalar), independent of the basis you are in. In contrast, a contravariant vector like a geometric vector takes no input and produces an output, which is just itself (the geometric vector).
* An example of a covariant vector is the [gradient](Jacobian-vs-Gradient-vs-Hessian.md#Gradient).

![](Screen%20Shot%202021-02-28%20at%2010.46.24%20AM.png)

---

References:

* [Tensors, Tensors, Tensors](https://bjlkeng.github.io/posts/tensors-tensors-tensors/)
* [What is a tensor (Quora response)](https://qr.ae/pNQWwK)
* [Contravariant and Covariant Tensors](https://www.youtube.com/watch?v=nNMY02udkHw)
* [Tensors for Beginners](https://www.youtube.com/playlist?list=PLJHszsWbB6hrkmmq57lX8BV-o-YIOFsiG)