# Laplacian (Operator)
Remember, [Operators](Operators.md) are a special type of function that take a function as input and return a function as output.
For instance, consider a function $f(x,y)$ such that it maps a point (tuple) $(x,y)$ from the input space to a point in $\mathbb{R}$. The [Graph-of-Function](Graph-of-Function.md) can be visualized as:

The Laplacian of $f$, denoted with $\Delta$, gives us a new scalar-valued function of $x$ and $y$. I.e. it gives a new function that takes in a 2-d input and outputs a number:
$\Delta f(x,y)$
Now, it is *kind of* like a second derivative. It is defined as taking the [Divergence](Divergence.md) of the [Gradient](Gradient.md):
$\Delta f(x,y) = \operatorname{div}(\operatorname{grad}(f)) = \nabla \cdot \nabla f = \overbrace{\nabla \cdot}^{\text{div}} \overbrace{\nabla f}^{\text{gradient}}$
We can read this as:
* $\nabla f \longrightarrow$ this gives us the gradient of $f$, i.e. a vector field
* $\nabla \cdot \longrightarrow$ taking the divergence of that vector field gives a scalar-valued function
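Writing this out for a 2-d input makes the "sum of second derivatives" nature explicit: the gradient collects the first partials into a vector, and the divergence then sums the derivatives of those components:

$$\nabla f = \left( \frac{\partial f}{\partial x},\ \frac{\partial f}{\partial y} \right) \quad\Longrightarrow\quad \Delta f = \nabla \cdot \nabla f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$$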
We can remember that the [Gradient](Gradient.md) gives us the direction of steepest ascent. Namely, it is a vector field on the input space $X \times Y$:

If this graph is kind of like a hill, each one of the vectors points in the direction you should walk to increase your height most rapidly.
Note that if we look at this vector field (indicating *the slope on the graph*), we can start to see areas that look like [sources and sinks](Divergence.md#Sources%20and%20Sinks)!

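A quick numeric check of the steepest-ascent idea (a minimal sketch; the hill $f(x,y) = -(x^2 + y^2)$, the grid spacing, and the sample point are my own choices for illustration):

```python
import numpy as np

# A "hill" with its peak at the origin: f(x, y) = -(x^2 + y^2)
h = 0.01
xs = np.arange(-1, 1, h)
X, Y = np.meshgrid(xs, xs, indexing="ij")
F = -(X**2 + Y**2)

# np.gradient returns the partial derivatives (dF/dx, dF/dy) over the grid
Fx, Fy = np.gradient(F, h, h)

# At the point (0.5, 0), walking uphill means walking back toward the peak,
# so the gradient should point in the -x direction: analytically (-2*0.5, 0) = (-1, 0)
i = np.argmin(np.abs(xs - 0.5))
j = np.argmin(np.abs(xs - 0.0))
print(Fx[i, j], Fy[i, j])   # ~ -1.0, ~ 0.0
```

Plotting `(Fx, Fy)` with something like `matplotlib`'s `quiver` would reproduce the arrow picture of this vector field.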
Now, recall that the [Divergence](Divergence.md) is meant to take in a single point from our input space and return how much the local behavior around that point acts like a source or a sink.
So, think about what it means to take the divergence of a gradient field:
* Why do the gradient vectors around a point of high divergence point away from it? Because there is a minimum on the graph at that point!

* In the opposite situation, where the divergence at a point is highly negative because the surrounding vectors converge towards it, the gradient field vectors point towards it because we are dealing with a *maximum*!
* Hence, the *divergence* of the gradient is very high (positive) at points that are like *minima*
* And the *divergence* of the gradient is very low (negative) at points that are like *maxima*
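We can sanity-check these two bullets symbolically (a minimal sketch using `sympy`; the bowl and dome example functions are my own choices):

```python
import sympy as sp

x, y = sp.symbols("x y")

def laplacian(f):
    """Divergence of the gradient: f_xx + f_yy."""
    return sp.diff(f, x, 2) + sp.diff(f, y, 2)

bowl = x**2 + y**2       # has a minimum at the origin
dome = -(x**2 + y**2)    # has a maximum at the origin

print(laplacian(bowl))   # 4  -> positive: gradient vectors point away (source-like)
print(laplacian(dome))   # -4 -> negative: gradient vectors point inward (sink-like)
```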
So the Laplacian operator can be thought of as:
> **Laplacian Operator**: How much of a minimum is the point $(x,y)$?
Note that this is very analogous to the 2nd derivative in single variable calculus! In that case we know that:
* $f'' < 0$ at points where $f$ is concave (facing down). This means that a critical point there is a maximum of $f$.
* $f'' > 0$ at points where $f$ is convex (facing up). This means that a critical point there is a minimum of $f$.

So, the Laplacian is an analogue of the 2nd derivative for scalar-valued (output is in $\mathbb{R}$) multivariable functions (input is multi-dimensional).
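The analogy also shows up numerically (a sketch assuming a uniform grid with spacing `h`; the five-point stencil is the standard finite-difference approximation of the Laplacian, and the bowl function is my own example): just as $f''(x) \approx \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}$, the 2-d stencil compares a point's value against its four neighbors.

```python
import numpy as np

def discrete_laplacian(F, h):
    """Five-point stencil: (sum of the 4 neighbors - 4 * center) / h^2."""
    return (
        np.roll(F, 1, axis=0) + np.roll(F, -1, axis=0)
        + np.roll(F, 1, axis=1) + np.roll(F, -1, axis=1)
        - 4 * F
    ) / h**2

h = 0.01
xs = np.arange(-1, 1, h)
X, Y = np.meshgrid(xs, xs, indexing="ij")
F = X**2 + Y**2                  # a bowl: minimum at the origin

lap = discrete_laplacian(F, h)
mid = len(xs) // 2
print(lap[mid, mid])             # ~4 > 0: the origin is "very much a minimum"
```

Seen this way, a large positive value means the point sits below the average of its neighbors, i.e. it is "very much a minimum", which is also the intuition used in the heat-equation video linked below.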
---
Links to: [Gradient](Gradient.md)
References
* [Laplacian Intuition](https://www.youtube.com/watch?v=EW08rD-GFh0)
* [heat equation, 3b1b](https://youtu.be/ly4S0oi3Yz8?t=913)