# Probabilistic Deep Learning

We can start with a traditional deep learning system that is simply a linear regression.

### Base Case

![](Screenshot%202023-03-17%20at%208.40.45%20AM.png)

### Aleatoric Uncertainty

Aleatoric uncertainty is the noise inherent in the data itself: instead of predicting a single value, the model predicts a distribution (e.g., a mean and a variance) for each input. A minimal TensorFlow Probability sketch of this is at the end of this note.

![](Screenshot%202023-03-17%20at%208.43.38%20AM.png)

### Epistemic Uncertainty

The question is: do we actually have enough data to learn the above mean and variance with confidence? In other words, are the red and green lines the *true* ones? (A sketch of one way to capture this, a variational layer whose weights are themselves distributions, is also at the end of this note.)

![](Screenshot%202023-03-17%20at%208.45.26%20AM.png)

![](Screenshot%202023-03-17%20at%208.47.10%20AM.png)

### Is a line even the right thing to fit?

![](Screenshot%202023-03-17%20at%208.48.04%20AM.png)

### A few random notes & thoughts

See [here](https://youtu.be/i5PEMt21dO8?list=PLBjSxdPpAJGz-zSjO1Lpkc-0ibLTcz2o9&t=1768).

It is often easier to reason about the question "does our data have Gaussian noise?" than about "should I use a squared-error loss function?" The two are equivalent (the short derivation is at the end of this note). When not using the Bayesian approach, modeling can feel very *ad hoc*.

![](Screenshot%202023-04-02%20at%202.47.40%20PM.png)

Priors are not regularizers; they are a fundamentally different mechanism for reaching our predictions. We are not doing optimization, we are doing marginalization (the marginalization integral is written out at the end of this note).

Sources of uncertainty:

* The data point is unlike anything we have seen (credibility), or we have only seen a few examples like it.
* The data point is similar to what we have seen, but the output is inherently probabilistic (i.e., we predict a distribution as our output).
* The data point is similar to what we have seen and the model is confident, but will this still hold up tomorrow? Nothing prevents our data from suddenly having a different relationship/response tomorrow. This feels like something we can learn from our 2 years of data: *how often does that happen*, and *what types of data points does it happen to*?

---
Date: 20230317
Links to:
Tags:

References:

* [TensorFlow Probability: Learning with confidence (TF Dev Summit '19) - YouTube](https://www.youtube.com/watch?v=BrwKURU-wpk)
* [Bayesian Deep Learning — ANDREW GORDON WILSON - YouTube](https://youtu.be/i5PEMt21dO8?list=PLBjSxdPpAJGz-zSjO1Lpkc-0ibLTcz2o9&t=1768)
* [G. Grosch, F. Lässig - Darts: Unifying time series forecasting models from ARIMA to Deep Learning - YouTube](https://youtu.be/thg10qDqpRE?t=1815)
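### Code sketches & derivations

First, the aleatoric case: a minimal sketch using TensorFlow Probability (the library from the first reference) of a regression model that outputs a full Normal distribution per input rather than a point estimate. The architecture, the softplus scale transform, and the learning rate are illustrative assumptions, not taken from the talk.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# The Dense layer emits two numbers per input: an unconstrained mean and an
# unconstrained scale. DistributionLambda turns them into a Normal, so the
# model's output *is* a distribution over y, not a single value.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(2),
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(
            loc=t[..., :1],
            scale=1e-3 + tf.math.softplus(0.05 * t[..., 1:]))),
])

# Fit by maximum likelihood: the loss is the negative log-probability of the
# observed y under the predicted distribution.
negloglik = lambda y, p_y: -p_y.log_prob(y)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss=negloglik)
# model.fit(x, y, epochs=500)  # x, y: your training data (hypothetical names)
```

Because the scale is learned per input, the model can report wider predictive intervals in noisier regions of x; that is the aleatoric (red and green lines) picture.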
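For the epistemic case, a sketch following the standard TFP pattern of a `DenseVariational` layer, which places a distribution over the weights themselves. The mean-field posterior, the trainable prior, the dataset size `N`, and the fixed observation scale of 1.0 are all illustrative assumptions.

```python
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Variational posterior over the weights: an independent Normal per weight,
# with trainable means and scales.
def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
    n = kernel_size + bias_size
    c = np.log(np.expm1(1.0))  # softplus offset so scales start near 1
    return tf.keras.Sequential([
        tfp.layers.VariableLayer(2 * n, dtype=dtype),
        tfp.layers.DistributionLambda(lambda t: tfd.Independent(
            tfd.Normal(loc=t[..., :n],
                       scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
            reinterpreted_batch_ndims=1)),
    ])

# Prior over the weights: a unit-scale Normal with a trainable mean.
def prior_trainable(kernel_size, bias_size=0, dtype=None):
    n = kernel_size + bias_size
    return tf.keras.Sequential([
        tfp.layers.VariableLayer(n, dtype=dtype),
        tfp.layers.DistributionLambda(lambda t: tfd.Independent(
            tfd.Normal(loc=t, scale=1.0),
            reinterpreted_batch_ndims=1)),
    ])

N = 1000  # assumed number of training points; weights the KL penalty

model = tf.keras.Sequential([
    tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable,
                                kl_weight=1 / N),
    tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1.0)),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss=lambda y, p_y: -p_y.log_prob(y))
```

Each forward pass samples a fresh set of weights from the posterior, so calling the trained model repeatedly on the same x traces out different lines; their spread is the epistemic uncertainty, and it should shrink where data is plentiful.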
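The "Gaussian noise vs. squared-error loss" remark has a one-line justification: the two questions are the same question. Assuming $y_i = f(x_i) + \varepsilon_i$ with $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$ and $\sigma$ fixed, the negative log-likelihood is

$$
-\log p(y \mid x) = -\sum_i \log \mathcal{N}\left(y_i \mid f(x_i), \sigma^2\right) = \frac{1}{2\sigma^2} \sum_i \bigl(y_i - f(x_i)\bigr)^2 + \frac{n}{2} \log\left(2\pi\sigma^2\right),
$$

which is the squared-error loss up to constants that do not depend on $f$. Choosing MSE *is* choosing a fixed-variance Gaussian noise model; the probabilistic framing just makes that choice explicit.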
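On "we are doing marginalization": the Bayesian prediction averages over all parameter settings weighted by their posterior probability, rather than committing to one optimized setting,

$$
p(y \mid x, \mathcal{D}) = \int p(y \mid x, w)\, p(w \mid \mathcal{D})\, dw .
$$

A regularizer changes which single $w$ the optimizer lands on; a prior $p(w)$ instead shapes the whole posterior $p(w \mid \mathcal{D}) \propto p(\mathcal{D} \mid w)\, p(w)$ that we average over. The `DenseVariational` sketch above approximates exactly this integral by sampling weights from an approximate posterior.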