# Description Is Not Explanation

Understanding what an [Explanation](Explanations.md) *is* can also be [understood via what it is not](Understand%20Systems%20via%20What%20They%20Are%20Not%20Doing.md). Explanation is *not* [description](Description%20Is%20Not%20Explanation.md) - they are fundamentally different beasts. In a sense, to understand what either one is, you must understand the other - just as a positive particle is defined as positive partly by the fact that its charge is *not* negative.

[Explanations](Explanations.md) [constrain](Constraints.md) what can be: they set the boundaries of what is possible, plausible, and consistent with our understanding of reality. They provide a framework for making sense of the world, limiting the range of potential descriptions and predictions.

[Descriptions](Description.md) also provide [constraints](Constraints.md), but they do so by defining the observable and measurable aspects of reality. They constrain by providing a set of facts that any valid explanation must account for, and by highlighting phenomena that require explanation.

### [Fabric of Reality](Fabric%20of%20Reality.md) Point of View

To understand requires that we can [explain](Explanations.md). As Deutsch puts it:

> Being able to predict things or to describe them, however accurately, is not at all the same thing as understanding them. Predictions and descriptions in physics are often expressed as mathematical formulae. Suppose that I memorize the formula from which I could, if I had the time and the inclination, calculate any planetary position that has been recorded in the astronomical archives. What exactly have I gained, compared with memorizing those archives directly? The formula is easier to remember — but then, looking a number up in the archives may be even easier than calculating it from the formula. The real advantage of the formula is that it can be used in an infinity of cases beyond the archived data, for instance to predict the results of future observations. It may also yield the historical positions of the planets more accurately, because the archived data contain observational errors. Yet even though the formula summarizes infinitely more facts than the archives do, knowing it does not amount to understanding planetary motions. Facts cannot be understood just by being summarized in a formula, any more than by being listed on paper or committed to memory. They can be understood only by being explained.
>
> *Fortunately, our best theories embody deep explanations as well as accurate predictions*. For example, the general theory of relativity explains gravity in terms of a new, four-dimensional geometry of curved space and time. It explains precisely how this geometry affects and is affected by matter. That explanation is the entire content of the theory; predictions about planetary motions are merely some of the consequences that we can deduce from the explanation.

Note that DD is not saying that formulas are *merely* descriptive. As he highlights, our best theories - often encoded in a formula, such as [General Relativity](General%20Relativity.md) - often embody deep explanations. But again, if you simply memorize the formula, you have not [understood](What%20it%20Means%20to%20Understand%20Something.md) it.
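As a concrete illustration of the formula/explanation contrast (my own sketch, not an example from the book): Kepler's third law is a compact *description* of planetary motion, while Newton's law of gravitation is part of an *explanation* from which that description can be derived.

```latex
% Description: Kepler's third law summarizes the astronomical archives.
% It relates orbital period T to semi-major axis a, but names no mechanism:
\[ T^2 \propto a^3 \]

% Explanation: Newton's law of gravitation posits a mechanism,
\[ F = \frac{G m_1 m_2}{r^2} \]
% from which the description can be derived. For a circular orbit of
% radius r about a mass M, equate gravity with the required centripetal
% force and substitute v = 2\pi r / T:
\[ \frac{G M m}{r^2} = \frac{m v^2}{r}
   \quad\Longrightarrow\quad
   T^2 = \frac{4\pi^2}{G M}\, r^3 \]
```

Memorizing $T^2 \propto a^3$ lets you predict, but only the derivation says *why* the exponent is 3 - and it is the derivation that offers far more to criticize and deduce from. (Deutsch's own example goes a level deeper still: general relativity explains gravity itself.)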
Also see [Explanations Imply Consequences](Explanations%20Imply%20Consequences.md).

### [Understanding Transformers via N-gram Statistics](Understanding%20Transformers%20via%20N-gram%20Statistics.pdf)

In [Understanding Transformers via N-gram Statistics](Understanding%20Transformers%20via%20N-gram%20Statistics.pdf) a great distinction between description and explanation is provided:

> A **description** merely requires that we can provide a post-hoc, per-instance approximation of transformer predictions in terms of an available rule. An **explanation** means we provide reasons for and thus can predict in advance why and when a particular rule approximates transformer predictions. Hence, we make the distinction between *form* (description) and *selection* (explanation).

### A Simple Example

We can square these two points of view with a simple example. Imagine we are dealing with a system of automobile traffic, and we are interested in what the traffic will be like at 3 pm. Given that we see bumper-to-bumper congestion stretching for miles, we could *describe* the reason for this as satisfying the rule: too many vehicles flowed onto the road in too short a time (for example, maybe 1000 cars flowed in and only 100 flowed out). But that doesn't tell us *why* that happened. Maybe it was the day before a holiday and everyone was leaving work early. Maybe a storm warning had come in and people wanted to make it home before it arrived. It could be any one of an infinite number of things, but the point is that our simple description does not shed light on any of this. For that we require an *explanation*. We need to get to the *why* - why is there such traffic?

The description is not useless - for instance, it does tell us that the traffic can be approximated by the simple rule of "too much inflow in too short a time". That at least rules out that all the cars on the road had been airlifted in and set down by helicopters. But it does not explain why too much inflow occurred in the first place.
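As a toy illustration (a hypothetical sketch of my own, not from any source above), here is what the *descriptive* rule looks like in code. It summarizes what happened, and it is equally consistent with every candidate cause - which is exactly why it describes but does not explain:

```python
# Hypothetical sketch: a purely descriptive rule for traffic congestion.
# It captures WHAT happened (inflow far exceeded outflow) but is silent
# on WHY - many incompatible causes produce exactly the same counts.

def is_congested(inflow: int, outflow: int, threshold: int = 500) -> bool:
    """Descriptive rule: 'too many vehicles in too short a time'."""
    return inflow - outflow > threshold

# One observation, many candidate explanations:
observation = {"inflow": 1000, "outflow": 100}
candidate_causes = [
    "day before a holiday, everyone left work early",
    "storm warning, everyone racing home",
    "accident blocking lanes upstream",
]

if is_congested(**observation):
    # The rule fires, but nothing in it selects among the causes.
    print("Congested. Possible causes:", candidate_causes)
```

An explanation would have to add structure that selects among the candidate causes in advance, rather than fitting a rule to the counts after the fact - the *form* vs. *selection* distinction from the transformer paper above.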
### Another (Classic) Example

Consider the phenomenon of "an apple falling". A description of falling tells us _what_ happens, but not _why_ it happens. For instance, stating that an apple falls from a tree with an acceleration of 9.8 m/s² is a description of the phenomenon. Even more generally, a description could simply be "things fall on earth". A **description** is essentially a summary or a set of observations about what happens. It could be mathematical, like the equations of motion in physics, or purely observational, such as describing the behavior of an object or phenomenon.

An explanation of the falling apple will say that it falls due to the force of gravity acting between the apple and the Earth, which according to the theory of general relativity is really a consequence of curved spacetime. This provides a deeper understanding of the phenomenon beyond mere description. So an **explanation** goes deeper and seeks to uncover the underlying *causes* or reasons for why something happens. It addresses the _why_ and _how_ questions, providing insight into the mechanisms or principles that produce the observed phenomena. Deutsch argues that explanations are more powerful because they offer understanding and the potential to predict and manipulate phenomena.

## Example I need to clean up

Thought experiment: say I have two strings that are produced. How can I tell whether one is just a description and the other an explanation?

* Explanations always reference causes. And causes always relate entities (broadly defined) to one another. So the string would need to relate entities to one another.
* Say the string was "the glass fell because the dog hit the table". This relates "the glass falling" to "the dog hitting the table".
* Whether the explanation is a *good* one is a different matter altogether. Is it hard to vary? Does it solve the problems it purports to solve? Is it constraining?
* A description won't reference causes. It will just record some state.
    * Ex 1: Jupiter was observed at location X on day Y (no reference as to why).
    * Ex 2: The tree was 10 feet tall in 2025, 13 feet tall in 2026, 18 feet tall in 2027... (just describing facts).
* But notice that it can be hard to say this isn't an explanation! A (bad) explanation could be "the tree is taller because time passed, and time fuels trees to grow".
* This is a bad explanation though! It names no mechanism - why does it not apply to rocks? It is just a correlation, not an explanation. The explanation is so general it could explain nearly anything ("the dog died. Why? Because time passed.").

## Best Explanation of this: As it relates to [Content](Content.md)

One thing I've noticed is that while a description feels very specific and is the opposite of vague, it often lacks the depth we want. The key here is that a description has less content than a general statement from which you could derive the description. This likely has something to do with language and its role in capturing properties of reality. While this is beyond my current expertise, we can succinctly say that if you have a description of something like planetary motion, an explanation could have derived that description. Given a choice between a description (an artifact downstream of an initial explanation) and the upstream explanation itself, you'd prefer the explanation. This preference exists because the explanation not only includes the artifact (the description) but also provides loads of other information you can criticize and deduce from.

## Maximizing Criticism

The goal is to maximize our means of criticism. By seeking out explanations and generative processes, we can derive the descriptions that are often our actual objective. This principle guides us to improve our theories by giving us more ways to criticize them.

To summarize: we aim to enhance our ability to criticize theories by seeking explanations rather than just descriptions. This approach leads to more comprehensive understanding and better theoretical development.

---
Date: 20240823
Links to: [Understanding Transformers via N-gram Statistics](Understanding%20Transformers%20via%20N-gram%20Statistics.pdf) [Fabric of Reality](Fabric%20of%20Reality.md) [Asking Better Questions](Asking%20Better%20Questions.md) [What it Means to Understand Something](What%20it%20Means%20to%20Understand%20Something.md)
Tags:
References:
* []()