# **Problem Reframing and Alignment**

In any learning system, we must ensure that our **learning objective** is aligned with our overall problem. To see why this matters, consider the following example.

Suppose we wish to build a content recommendation system. The goal is to recommend content the user will find most interesting and engaging. Now, suppose we **frame** this as a ***classification* task** that predicts the probability that a user will click on a thumbnail. At first this may seem like a good idea: we certainly have tons of historical data capturing which videos different users clicked on, so this framing would not be difficult to set up. However, it actually isn't in line with our overall goal! If we follow this approach, we will really just be optimizing for *clickbait*, since that is what will surely have the highest probability of being clicked.

What we really wish to optimize for is quality content. In that case a better target variable would be *watch time*, and we can reframe this as a ***regression* task***:* for a given user we take in a list of candidate videos, predict the expected watch time for each, and then recommend the videos with the highest predictions, as in the sketch below. We could also keep the problem as a classification task but update our target variable to *did the user watch at least half of the video?* There are often many ways to reframe a problem, but we must always consider it holistically.
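As a minimal sketch of the regression framing (the `predict_watch_time` callable, field names, and toy data here are all hypothetical stand-ins for a trained regression model and real logs):

```python
from typing import Callable, Dict, List, Tuple

def rank_by_watch_time(
    user: Dict,
    candidates: List[Dict],
    predict_watch_time: Callable[[Dict, Dict], float],
) -> List[Tuple[float, Dict]]:
    """Score each candidate video for this user and sort best-first.

    `predict_watch_time` stands in for any trained regression model whose
    target variable is expected watch time (e.g. minutes watched).
    """
    scored = [(predict_watch_time(user, video), video) for video in candidates]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

# Toy usage with a stand-in "model" that just reads a historical average.
if __name__ == "__main__":
    user = {"id": 42}
    candidates = [
        {"title": "clickbait thumbnail", "avg_watch_minutes": 0.5},
        {"title": "in-depth tutorial", "avg_watch_minutes": 18.0},
    ]
    stub_model = lambda u, v: v["avg_watch_minutes"]  # hypothetical stand-in
    for score, video in rank_by_watch_time(user, candidates, stub_model):
        print(f"{score:5.1f}  {video['title']}")
```

Note that the alternative classification reframing above would only change the label construction (e.g. `label = int(watch_time >= 0.5 * duration)`); the ranking step stays the same.

---
Date: 20220801
Links to: [Data Science Mental Models](Data%20Science%20Mental%20Models.md)
Tags: #review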