# Diagnostic Odds Ratio
The cleanest way to think about the DOR is in terms of [likelihood ratios](https://en.wikipedia.org/wiki/Likelihood_ratios_in_diagnostic_testing). There is both a positive and negative likelihood ratio, defined as:
$LR+ = \frac{P(Pred = + \mid Actual = +)}{P(Pred = + \mid Actual = -)}$
$LR- = \frac{P(Pred = - \mid Actual = +)}{P(Pred = - \mid Actual = -)}$
We can describe $LR+$ in words as:
* The probability that our model predicts the positive class, given that the actual class was positive
* Divided by the probability that our model predicts the positive class, given that the actual class was negative
We want $LR+$ to be *large* - its numerator is effectively our true positive rate, so positive predictions should be driven by actual positives rather than false positives. By the same logic, we want $LR-$ to be *small*: its numerator is the rate at which we predict negative for actual positives, i.e. our miss rate.
Now we can define the DOR as:
$DOR = \frac{LR+}{LR-}$
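As a quick sanity check, $LR+$, $LR-$, and the DOR can be computed directly from confusion-matrix counts. The counts below are hypothetical, purely for illustration, and the sketch uses the standard definitions ($LR+$ = sensitivity / FPR, $LR-$ = FNR / specificity):

```python
# LR+, LR-, and DOR from hypothetical confusion-matrix counts.
tp, fn, fp, tn = 80, 20, 10, 90

sensitivity = tp / (tp + fn)  # P(Pred = + | Actual = +)
fpr = fp / (fp + tn)          # P(Pred = + | Actual = -)
fnr = fn / (tp + fn)          # P(Pred = - | Actual = +)
specificity = tn / (fp + tn)  # P(Pred = - | Actual = -)

lr_pos = sensitivity / fpr    # LR+ (want this large)
lr_neg = fnr / specificity    # LR- (want this small)

dor = lr_pos / lr_neg         # equals (tp * tn) / (fp * fn)
print(lr_pos, lr_neg, dor)
```

With these counts, $LR+$ is 8, $LR-$ is about 0.22, and the DOR comes out to 36 - the same value as $\frac{TP \times TN}{FP \times FN}$.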
It can also be represented in terms of true / false positives / negatives:
$DOR = \frac{\frac{TP}{FN}}{\frac{FP}{TN}} = \frac{\frac{TP}{FP}}{\frac{FN}{TN}} = \frac{TP \times TN}{FP \times FN}$
This gives us a bit more insight into the $LR$. For instance:
$LR+ = \frac{P(Pred = + \mid Actual = +)}{P(Pred = + \mid Actual = -)} = \frac{TP / (TP + FN)}{FP / (FP + TN)}$
The class totals $(TP + FN)$ and $(FP + TN)$ cancel when we take $\frac{LR+}{LR-}$, which is why the DOR reduces to the simple ratio $\frac{TP \times TN}{FP \times FN}$. Again, the key insight here is that $LR+$ deals entirely with the predicted positive class, but the probabilities are written "backwards". Normally, we think about a true positive as: "given our model predicted positive, how often was the actual class positive" - in other words, we normally condition on what we predicted. Here we condition on what the actual class was, and that is exactly what makes these terms insensitive to class prevalence.
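This prevalence-invariance is easy to check numerically. In the sketch below (hypothetical counts), every actual positive is duplicated 10x - the model's per-class behavior is unchanged, so the DOR stays the same while precision shifts:

```python
# Sketch: the DOR is prevalence-invariant, precision is not.
def dor(tp, fn, fp, tn):
    return (tp * tn) / (fp * fn)

def precision(tp, fp):
    return tp / (tp + fp)

tp, fn, fp, tn = 80, 20, 10, 90   # hypothetical counts

# Make actual positives 10x more prevalent (same per-class behavior).
tp10, fn10 = tp * 10, fn * 10

print(dor(tp, fn, fp, tn), dor(tp10, fn10, fp, tn))  # unchanged
print(precision(tp, fp), precision(tp10, fp))        # shifts upward
```

The DOR is 36 in both scenarios, while precision climbs from roughly 0.89 to roughly 0.99 purely because positives became more common.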
### Relation to Precision (and also F1)
How does this relate to [precision](Accuracy%20Precision%20Recall%20F1.md)? Well, precision is defined as:
$Precision = \frac{TP}{TP+FP}$
So, the more prevalent the positive class, the larger precision will be (even if the model is dumb and just always guesses positive).
Will the same thing happen with DOR? No! Here is why - say we have an imbalanced problem where the positive class occurred 90 times and the negative 10 times, and say our model is dumb and always predicts positive. Precision is $\frac{90}{100} = 0.9$, which looks great. Now look at the numerator of DOR:
$\frac{TP}{FP} = \frac{90}{10} = 9$
And the denominator:
$\frac{FN}{TN} = \frac{0}{0}$
The denominator is indeterminate, so the DOR is undefined rather than large - the metric refuses to reward a model that never predicts negative. The likelihood-ratio view makes this even cleaner: sensitivity and the false positive rate are both $1$, so $LR+ = 1$, exactly the value of an uninformative test, no matter how imbalanced the classes are.
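Checking the dumb-classifier example numerically (the numbers match the 90/10 scenario above):

```python
# Imbalanced data: 90 actual positives, 10 actual negatives;
# a dumb model that always predicts positive.
tp, fn, fp, tn = 90, 0, 10, 0

precision = tp / (tp + fp)    # 90 / 100 = 0.9 - looks deceptively good
sensitivity = tp / (tp + fn)  # P(Pred = + | Actual = +) = 1.0
fpr = fp / (fp + tn)          # P(Pred = + | Actual = -) = 1.0
lr_pos = sensitivity / fpr    # LR+ = 1.0: an uninformative test

# The raw DOR (TP * TN) / (FP * FN) is 0 / 0 here - indeterminate, not large.
print(precision, lr_pos)
```

In practice, a common convention (an assumption here, not something from this note's example) is to add 0.5 to every cell of the confusion matrix so the DOR estimate stays finite when a cell is zero.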
Now how does this relate to F1 score? F1 is the harmonic mean of precision and recall, so it inherits precision's sensitivity to class prevalence. For the dumb always-positive model above, precision is $0.9$ and recall is $1.0$, giving $F1 = \frac{2 \times 0.9 \times 1.0}{0.9 + 1.0} \approx 0.95$ - a high score for a model the DOR correctly refuses to reward.
---
Date: 20240506
Links to:
Tags:
References:
* [Diagnostic odds ratio - Wikipedia](https://en.wikipedia.org/wiki/Diagnostic_odds_ratio)
* [Likelihood ratios in diagnostic testing - Wikipedia](https://en.wikipedia.org/wiki/Likelihood_ratios_in_diagnostic_testing)