# LightGBM
LightGBM is a form of [Gradient Boosting](Gradient%20Boosting.md).
> In fact, the most important reason for naming this method LightGBM is its use of GOSS (Gradient-based One-Side Sampling), introduced in the [LightGBM paper](https://papers.nips.cc/paper/6907-lightgbm-a-highly-efficient-gradient-boosting-decision-tree.pdf). GOSS is the newer, lighter GBDT implementation (hence "light" GBM).
>
> Standard GBDT is reliable, but it is not fast enough on large datasets. GOSS therefore proposes a gradient-based sampling method that avoids scanning the entire dataset at every iteration. The intuition: an instance with a small gradient is already well fit, so there is little to learn from it, while an instance with a large gradient is still poorly fit and needs more attention. That gives us **two sides**: instances with large gradients and instances with small gradients. GOSS keeps all of the large-gradient data and randomly samples from the small-gradient data only (**sampling just one side is why it is called One-Side Sampling**), upweighting the sampled instances so the overall data distribution is approximately preserved. This shrinks the effective training set per iteration, so GOSS converges faster. For more intuition about GOSS, see this [blog post](https://towardsdatascience.com/what-makes-lightgbm-lightning-fast-a27cf0d9785e).
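
The sampling step described above can be sketched in NumPy. This is a minimal illustration of the idea, not LightGBM's actual implementation; the function name `goss_sample` and the rate parameters are my own, though the `(1 - top_rate) / other_rate` reweighting factor for the small-gradient sample comes from the LightGBM paper.

```python
import numpy as np

def goss_sample(gradients, top_rate=0.2, other_rate=0.1, rng=None):
    """Sketch of Gradient-based One-Side Sampling (GOSS).

    Keeps every instance among the top `top_rate` fraction by |gradient|,
    randomly samples an `other_rate` fraction of the rest, and scales the
    sampled instances' weights by (1 - top_rate) / other_rate so the
    reduced set approximates the full-data gradient statistics.
    """
    rng = np.random.default_rng(rng)
    n = len(gradients)
    top_k = int(top_rate * n)
    rand_k = int(other_rate * n)

    order = np.argsort(-np.abs(gradients))  # indices by |gradient|, descending
    top_idx = order[:top_k]                 # large gradients: keep all
    rest = order[top_k:]                    # small gradients: sample one side
    sampled = rng.choice(rest, size=rand_k, replace=False)

    indices = np.concatenate([top_idx, sampled])
    weights = np.ones(top_k + rand_k)
    weights[top_k:] = (1 - top_rate) / other_rate  # compensate the downsampling
    return indices, weights
```

With the defaults, each boosting iteration would train on only 30% of the data (20% kept outright plus 10% sampled), with the sampled instances weighted by 8x.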
---
Date: 20230519
Links to:
Tags:
References:
* [Understanding LightGBM Parameters (and How to Tune Them)](https://neptune.ai/blog/lightgbm-parameters-guide)
* [What makes LightGBM lightning fast? | by Abhishek Sharma | Towards Data Science](https://towardsdatascience.com/what-makes-lightgbm-lightning-fast-a27cf0d9785e)