# Neuron
It is useful to take a moment to highlight exactly what a **neuron** is in a neural network. I say this because if you spend too much time in linear algebra land it can be easy to forget. The diagram below shows multiple visualizations that are quite common. The key is that a neuron is the combination of a single dimension of the hidden layer and the activation applied to it. Let's focus on a single neuron from diagram 3: $h_2$ and $a_2$. This neuron can be thought of as a little function: it takes an input, $h_2$, applies some nonlinear activation function, and produces an output, $a_2$.
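As a rough sketch in code (the weight values, the `tanh` activation, and the names `neuron` and `w_2` are illustrative assumptions, not from the diagram), a single neuron is just a dot product followed by a nonlinearity:

```python
import numpy as np

def neuron(x, w, activation=np.tanh):
    """One neuron: a single row of W, followed by a nonlinearity (illustrative sketch)."""
    h = w @ x            # pre-activation: one dimension of the hidden layer (e.g. h_2)
    a = activation(h)    # post-activation: the neuron's output (e.g. a_2)
    return h, a

x = np.array([0.5, -1.0, 2.0])     # input vector x
w_2 = np.array([0.1, 0.4, -0.3])   # the row of W feeding this neuron (made-up values)

h_2, a_2 = neuron(x, w_2)
print(h_2, a_2)                    # h_2 is the linear part, a_2 = tanh(h_2)
```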

A key point to realize is that there are **two vector spaces** at play here:
1. $H$: The vector space of dimension $d$ (which can be thought of as $\mathbb{R}^d$) that contains the *output* of applying the linear transformation $W$ to our input $x$ (where $x \in X$). Note that:
$W: X \rightarrow H$
2. $A$: The vector space of dimension $d$ that contains the output of applying our nonlinear transformation to $h \in H$.
Note that $H \neq A$, because they are related via a *nonlinear* transformation (see the sketch below).
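To make this concrete, here is a minimal sketch (the dimensions, the `tanh` activation, and the random values are assumptions): $W$ maps $x \in X$ to $h \in H$, and the activation maps $h \in H$ to $a \in A$. Both spaces have dimension $d$, but the map between them is nonlinear, so they are not related by any linear change of coordinates:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d = 3, 4
W = rng.normal(size=(d, d_in))   # linear map W: X -> H
x = rng.normal(size=d_in)        # x in X

h = W @ x          # h in H: the pre-activations, output of the linear map
a = np.tanh(h)     # a in A: the post-activations, same dimension d, different space

# The map H -> A does not respect vector addition, so it is not linear:
h1, h2 = rng.normal(size=d), rng.normal(size=d)
print(np.allclose(np.tanh(h1 + h2), np.tanh(h1) + np.tanh(h2)))  # False
```

The last line prints `False` for essentially any random draw, which is the point: $H$ and $A$ are connected by a nonlinear map, not a change of basis.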
---
Date: 20230803
Links to:
Tags:
References:
* []()