# Explanations Provide a Fundamentally Different Structure and Search Process

Let's start with the following claim: in order to improve the world around you, you must interact with it. This requires that *something* is physically instantiated. Now let us consider the space of all possible physical instantiations. This includes rocks, water molecules, ink on the page of a paperback book, text rendered on an LCD screen, your brain, and so on. This is an enormous, infinite space. Let's call this the **P**hysical **I**nstantiation space, or $\Pi$ for short. We can call some specific physical instantiation, such as the polycarbonate lens of my glasses, $\pi$.

$\Pi = \{ \pi \mid \pi \text{ is a possible physical instantiation} \}$

We can now consider how [Evolution](Evolution.md) navigates this space. It does so via *local* trial and error. Imagine we are in the corner of this space that is occupied by sea turtles. In order to create a variant of a sea turtle with a harder shell, evolution will proceed by a random mutation of a gene which leads to the phenotype of a harder shell. If that proves useful in solving some of the [Problems](Problem.md) posed by the environment, it will be preserved and propagated through the gene pool.

Notice how local this jump was. The turtle already had a set of genes that coded for a shell. The shell's functional *purpose* was to provide protection. But evolution was never going to jump to a shell made of steel or titanium, which would also provide (increased) protection. This is because there is no viable local path from the current chemical composition to one so drastically different.

For the purposes of this note, evolution can be thought of as falling in the class of [Local Search](Local%20Search.md) algorithms: evolution uses trial and error to locally search $\Pi$. There is no sense of direction (each jump is random) and no sense of where a certain $\pi$ is in the full space of $\Pi$.
Mathematically, we could of course compute a distance between two $\pi$'s; however, evolution will blindly trudge forward, only ever concerned with whether or not they are locally adjacent. We can state that at any point in time a given $\pi$ can only move to its neighbors $\mathcal{N}(\pi)$. Evolution ends up being constrained because it can only get from $\pi_A$ to $\pi_B$ in this space if there is a path of viable mutations. But there may be no path satisfying this criterion! Both $\pi_A$ and $\pi_B$ may be viable, yet there is no way to get from one to the other.

Before considering explanations, let us linger on evolution for a moment. Is there any portion of $\Pi$ that evolution cannot explore? There most definitely is! Evolution cannot access any region of $\Pi$ that requires explanatory knowledge. For instance, consider the theory of [General Relativity](General%20Relativity.md). This is a deep, fundamental explanation about gravity and the nature of [Spacetime](Spacetime.md). It is [*Abstract*](Abstractions.md), but it can be *physically instantiated* in the form of text and equations in a book, [Knowledge](Knowledge.md) in the brain of a physicist, and so on—each of which is another $\pi \in \Pi$. Evolution will never be able to explore regions of $\Pi$ such as these. Specifically, it will never be able to explore regions that require [Explanatory Knowledge](Explanatory%20Knowledge.md); it is limited to [Non-explanatory Knowledge](Non-explanatory%20Knowledge.md) regions.

Now consider [Explanations](Explanations.md) created by [Conjecture and Criticism](Conjecture%20and%20Criticism.md). This is a form of [Explanatory Knowledge](Explanatory%20Knowledge.md). Explanations create an entirely new, rich structure that can be used to navigate $\Pi$. A useful way to think about this structure is as a complex space layered atop $\Pi$. We can call this space of explanations $E$. It is effectively a hierarchy of interconnected abstractions—a fabric of sorts.
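The local-search picture can be made concrete with a toy sketch. The genome encoding, mutation rule, and viability predicate below are all illustrative assumptions, not anything from this note; the point is only that two viable states ($\pi_A$ and $\pi_B$) can both exist while no path of viable single mutations connects them.

```python
from collections import deque

# Toy model of evolution as local search over a space of "genomes".
# Encoding and viability rule are illustrative assumptions.

def neighbors(genome):
    """All genomes reachable by a single point mutation (one bit flipped)."""
    return [genome[:i] + ('1' if genome[i] == '0' else '0') + genome[i+1:]
            for i in range(len(genome))]

def reachable(start, goal, viable):
    """Can `start` reach `goal` through a path of viable single mutations?"""
    seen, frontier = {start}, deque([start])
    while frontier:
        g = frontier.popleft()
        if g == goal:
            return True
        for n in neighbors(g):
            if n not in seen and viable(n):
                seen.add(n)
                frontier.append(n)
    return False

# Assumed viability rule: a genome survives only if it has at most one '1',
# OR is all '1's. Then '000' and '111' are both viable, but every path
# between them passes through non-viable intermediates.
viable = lambda g: g.count('1') <= 1 or g == '1' * len(g)

print(reachable('000', '100', viable))  # True: one viable mutation away
print(reachable('000', '111', viable))  # False: no viable path exists
```

Both endpoints are "fit", yet blind local search can never connect them: exactly the constraint described above.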
To build intuition for how $E$ interacts with $\Pi$, consider the example of the turtle shell again. When confined to evolution, all we have is the specific instantiated $\pi$ and its neighbors $\mathcal{N}(\pi)$. Once we have $E$, we have an *arbitrarily rich, extensible* structure of abstractions which we can make use of. For instance, $E$ contains abstract concepts such as purpose, protection, analogy, and rust. This allows us to ask: what is the purpose of the shell? To provide *protection*. What else can provide protection? By analogy, other hard materials. However, not all hard materials will do. Turtles spend most of their time in water, so the shell would need to avoid rusting. We effectively build up an explanation of the role the shell is serving. Given this explanation, we can think about ways of improving the shell. This all relies on the explanation and abstractions in $E$.

Let's pause a moment and reflect on that. What exactly do I mean by "arbitrarily" flexible? Simply put, as explanatory knowledge and our understanding of $E$ grow, regions of $\Pi$ that were previously disconnected always have the chance of ending up connected. Some regions may forever remain distant because there is no good explanation linking them. However, if an explanation linking them exists, there is nothing stopping us from finding it and creating that connection. As we create new explanations, $E$ will evolve, and so will the way we explore it! This of course impacts the areas of $\Pi$ that we arrive at.

What about this "hierarchy" or "layers" of $E$ that I mentioned? Let's consider another example. Imagine being the creator of the Winston Churchill statue in Parliament Square. This is yet another $\pi$. Why was this created? This specific $\pi$ required several different explanations, at different layers of $E$. Let us start at the lowest level. The statue is made of copper. Why copper? Well, it was chosen for its material properties.
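The shell reasoning above can be sketched as a query over abstractions rather than a local mutation. The material table and property names below are invented for illustration; the point is that an explanation (purpose implies required properties) lets us jump directly to any satisfying material, however chemically distant.

```python
# Sketch of navigating via E: instead of mutating the current shell locally,
# we derive required properties from the shell's purpose and jump to any
# material satisfying them. All data here is illustrative, not real.

materials = {
    "keratin":  {"hard": True,  "rusts": False},
    "steel":    {"hard": True,  "rusts": True},
    "titanium": {"hard": True,  "rusts": False},
    "mud":      {"hard": False, "rusts": False},
}

# The explanation: protection requires a hard material, and the aquatic
# environment requires that it not rust.
required = {"hard": True, "rusts": False}

candidates = [m for m, props in materials.items()
              if all(props[k] == v for k, v in required.items())]
print(candidates)  # ['keratin', 'titanium']
```

Note that titanium, unreachable by any path of viable mutations, is one hop away once the search runs over abstractions instead of chemistry.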
The creators realized that wood, mud, and wax were less resilient. Why Winston Churchill? This requires explanations of war, honor, and leadership. Perhaps by some unknown means this $\pi$ could have been created via trial and error. However, even if that were the case, it still would have been a local process that took a near-infinite amount of time. But by having access to $E$, the creators brought it into existence in just a few months. Notice the power of $E$. It allowed us to jump to entirely disconnected regions of $\Pi$, for when working in the space of $E$, they are actually quite close by! Thus, $E$ allows us to move away from local exploration of $\Pi$ to arbitrarily flexible exploration. It allows arbitrarily complex layering of explanations and abstractions. And, fascinatingly enough, it can be used to *generate* elements of $\Pi$.

We can take this one step further. There are times where $E$ is not a mere convenience for navigating different regions of $\Pi$—instead, $E$ is essential to capture what is really going on in $\Pi$. Consider [The Domino That Didn't Fall](The%20Domino%20That%20Didn't%20Fall.md). This is yet another $\pi$. Viewed solely in terms of trial and error and locally similar domino configurations, this set of dominos may be viewed as nearly equivalent to another set with one removed. However, the removal of a single domino may transform this from a physical system that computes whether a given input number is prime into a random configuration of dominos that just creates noise. Unlike in the evolution of genes, where a mutated gene that breaks its original function will likely cause its organism to die (thus removing itself from the gene pool), there is no physical feedback mechanism or objective function for the domino computer. The feedback is whether or not it is effectively simulating the abstract concept of primality testing. Without $E$ and the world of abstractions, there is no way to iterate to this particular $\pi$.
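The domino point can be made concrete: the only "feedback" that distinguishes a working configuration from a broken one is agreement with the abstract specification of primality. The input/output-table stand-in for a domino configuration below is an illustrative assumption.

```python
# Sketch: the feedback for the domino computer lives at the abstract level.
# We stand in for a domino configuration with a mapping from input to output,
# and can only judge it by comparing against the abstract spec. Illustrative.

def is_prime(n):
    """The abstract specification the dominos are meant to simulate."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

working = {n: is_prime(n) for n in range(2, 20)}
broken = dict(working)
broken[9] = True  # "remove one domino": a physically tiny change

def faithful(config):
    """Abstract feedback: does the configuration simulate primality?"""
    return all(out == is_prime(n) for n, out in config.items())

print(faithful(working))  # True
print(faithful(broken))   # False: physically similar, abstractly broken
```

No physical measurement of the two configurations reveals which one "computes primality"; only the comparison against `is_prime`, an element of $E$, does.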
The benefits of $E$ do not stop there. Both evolution and explanatory knowledge are based on [Virtual Reality](Virtual%20Reality.md) and [Self-Similarity](Self-Similarity.md). Evolution makes use of self-similarity in that the resulting organism's genes encode knowledge about the physical world. The knowledge must be similar to the physical situation it is meant to deal with. But this self-similarity exists entirely at the level of the physical world ($\Pi$). Explanations, on the other hand, allow us to run simulations (a form of [Virtual Reality](Virtual%20Reality.md)) in our heads and generate counterfactuals—would $X$ still occur if $Y$ did not occur? While this is still a physical process (the electrochemical activity of our brains, the CPU cycles of a computer simulation), the self-similarity can exist at the abstract level. It is in this way that we can [Let Our Theories Die In Our Stead](By%20Criticizing%20Our%20Theories%20We%20Can%20Let%20Our%20Theories%20Die%20In%20Our%20Stead.md).

We must also remember that in order to make [Progress](Progress.md) we must advance from [Problems](Problem.md) to better [Problems](Problem.md). $E$ helps there as well, for it opens up an entirely new class of problems—the abstract!

## The Limits of Non-Explanatory Knowledge

Evolution fundamentally creates descriptions and predictions. Description is insufficient for progress. Consider the rising and setting of the sun. That this occurs daily, with the sun rising in the east and setting in the west, is a *description*. That this occurs because the earth rotates on its axis is an *explanation*. Notice how the description provides no new problems. It may allow us to solve a current problem; for example, perhaps it could help design a sundial that can be used to tell the time, or a house with optimal windows to capture the sun's heat throughout the day.
However, it does not generate new problems such as "Why does the earth rotate on its axis? What would happen if it stopped?"

Both evolution and C&R can create *descriptions* and *predictions*. However, C&R creates explanations, from which descriptions and predictions can be *derived*. They are *consequences* of the explanation. These consequences are part of the logical fabric that C&R provides. [Description Is Not Explanation](Description%20Is%20Not%20Explanation.md). Descriptions provide no structure. They are an isolated, disconnected space. There is no concept of "direction"; we just have brute-force trial and error. Without an explanation, there isn't even a clear way to conjecture, given that you have description $H$, whether the next best description to try is $I$ or $J$. Explanation and the construction of $E$ can make use of [Intentionality](Intentionality.md) (though they don't need to). We can seek a good explanation of some phenomenon. We can target the gaze of conjecture and refutation on it.

Imagine you had the entire description of some phenomenon you were interested in studying. Say it was the orbit of Jupiter around the Sun. You have every position of the planet perfectly recorded for over 100 years. It has been perfectly *described*. But where does that leave us? Does that help us understand Saturn's orbit? Not one iota. Does it tell us where Jupiter will orbit 10 years from now? Nope. For all we know, in 10 years Jupiter's orbit will alter slightly due to some other effect (e.g. a gravitational pull from Saturn). So while we may have perfectly recorded Jupiter's orbit, we are entirely isolated and trapped from moving any further. To do that we would need an explanation. An explanation will pose new questions, new problems. It will provide structure that we can leverage and hang on to while exploring outwards. There is no "direction" between descriptions. There is no way to effectively navigate them. Explanations provide just that.
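The Jupiter contrast can be sketched directly: a description is a lookup table that is silent outside its records, while even a crude explanatory model applies to any year. The recorded values and the toy circular-orbit model below are illustrative assumptions, not real ephemerides.

```python
# Sketch of description vs explanation. Numbers and model are illustrative.

# A *description*: recorded angular positions (degrees) for years 0-9.
recorded = {year: (year * 30.35) % 360 for year in range(10)}

def described_position(year):
    """A pure description: can only recall what was recorded."""
    return recorded.get(year)  # None for any year never observed

def explained_position(year):
    """A toy 'explanation' (uniform circular orbit): applies to any year."""
    period = 11.86  # Jupiter's orbital period in years (approximate)
    return (360.0 * year / period) % 360

print(described_position(20))   # None: the description is trapped
print(explained_position(20))   # a prediction for an unobserved year
```

The description cannot even interpolate a reason for its own entries; the model, however crude, generates predictions and invites new problems (why this period? why does it drift?).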
Given an explanation of flight and a design of an aircraft and how it works, if the plane fails we can determine the "direction" to move in design space to create a better plane. That is simply not possible in an explanationless paradigm. The only way to improve is to try a continual stream of similar approaches, seeing if one works.

Imagine you are the town astronomer in the 1600s. You have memorized exactly where Jupiter was each night for the last ten years. However, now someone asks where Saturn was for the last 10 years, or where Jupiter will be 10 years from now. You don't have this fact planted in your memory, so you are completely out of luck. That you know where Jupiter was for the past 10 years does not help you! You might as well know that there are ten thousand grains of sand in the hourglass next to your bed. Because you simply have a description and not an explanation, you have no structure to use if anything changes. A delicious example of this is shown in [The World is a High Tech Oracle](The%20World%20is%20a%20High%20Tech%20Oracle.md).

## Reach

We can thus state that evolution is context-dependent, has limited [Reach](Reach.md), and no [Intentionality](Intentionality.md). Evolution is incredibly *local*. Explanatory knowledge, on the other hand, has reach; it is creative and advances through intentional conjecture and criticism. Therefore, while both evolution and human ingenuity can lead to similar outcomes like flight, the underlying knowledge and the potential for future progress are fundamentally different. Evolution creates specific, non-explanatory "descriptions" with limited reach, while explanatory knowledge allows for general understanding, intentional innovation, and the potential to achieve outcomes far beyond the constraints of biological evolution.
The difference isn't just about the time it takes to arrive at a solution; it's about the kind of solution and the open-ended potential for further progress that explanatory knowledge uniquely provides. If we think in terms of our space $\Pi$, notice that explanatory knowledge will be used to generate a specific $\pi$. However, the adjacent possible of nearby $\pi$, when viewed through the space $E$, is exponentially larger. This is because explanatory knowledge has [Reach](Reach.md) and [Universality](Universality.md).

Consider flight. A bird's flight is very parochial. It is effectively just a description of a single flight implementation. Human explanatory knowledge of aerodynamics, on the other hand, is universal in its applicability. The principles of lift, drag, and thrust, once understood, can be applied to an enormous range of designs and environments—from airplanes to helicopters to rockets that can reach the moon and beyond. This explanatory knowledge is not tied to a specific biological form or a narrow set of conditions. It has reach and can create a vast set of $\pi$ far from the original.

%%TODO: **Explanations have reach**. They apply outside of the immediate area that they were created within. E.g. an explanation, which can be thought of as a $\pi$ generator, may generate a large number of $\pi$ that are consistent with reality, outside of the original $\pi$ it was meant to generate%%

## Descriptions and Physical Instantiations are Isomorphic

Descriptions and physical instantiations are [Isomorphic](Isomorphism.md)—that is, they have a one-to-one mapping between them. Explanations and physical instantiations have a one-to-many mapping between them. A single, powerful explanation (a scientific theory) typically applies to and elucidates the behavior of a vast number of different physical instantiations. For example, the theory of gravity explains the motion of planets, falling apples, and countless other phenomena.
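The one-to-many mapping can be shown with the gravity example itself: one explanatory formula generates predictions for many distinct physical instantiations. The masses and radii below are rough textbook values.

```python
# Sketch of the one-to-many mapping: a single explanation (Newtonian
# surface gravity, g = G*M / r^2) covers many physical instantiations.
# Masses (kg) and radii (m) are rough textbook values.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_gravity(mass_kg, radius_m):
    """One explanation, applicable to any body whatsoever."""
    return G * mass_kg / radius_m ** 2

instantiations = {
    "Earth":   (5.972e24, 6.371e6),
    "Moon":    (7.342e22, 1.737e6),
    "Jupiter": (1.898e27, 6.991e7),
}

for body, (m, r) in instantiations.items():
    print(f"{body}: {surface_gravity(m, r):.2f} m/s^2")
```

A mere description, by contrast, would be a separate record per body, with no way to extend the list to a body not yet measured.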
## A Jump to Universality: An Analogy to Turing Machines

There is a strong argument that the structure of $E$ provides a jump to [Universality](Universality.md). $E$ can become arbitrarily complex and rich. We can layer on an unbounded number of abstractions. And we can traverse this structure in an unbounded number of creative ways.

Consider the jump to universality that was made with the creation of the universal Turing machine. It used to be that we had specific hardware configurations (specific Turing machines) designed to perform specific computations and run specific programs. A universal Turing machine then came along and could run any program (that is physically possible to run)—i.e. any program that any other machine could run. Put a bit more mathematically, let us refer to the set of all programs as $P$ and the set of all Turing machines as $T$. We can then say that we went from:

$\textbf{Pre Universal Turing Machine:}\;\;\;\text{For each } p \in P, \text{ there is a unique } t \in T \text{ such that } t \text{ computes } p$

$\textbf{Post Universal Turing Machine:}\;\;\;\text{There exists a } u \in T \text{ such that for every } p \in P, u \text{ computes } p$

Is there a parallel here to evolution vs. conjecture and refutation, non-explanatory vs. explanatory knowledge? In fact there is! Consider a specific $\pi \in \Pi$, such as a butterfly. This is a physical instantiation that is tied to a specific physical environment. It is this specific physical environment that is required in order to generate the butterfly. The physical environment of the moon simply wouldn't do. So this physical environment was needed to generate the *description* and thus the *physical instantiation* of the butterfly.
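The pre/post contrast above can be sketched in a few lines: dedicated "machines" are hardwired functions, while a universal machine treats the program as data. The tiny instruction set below is an invented illustration, not a real Turing machine.

```python
# Sketch of the pre/post universality contrast. The instruction set is an
# invented illustration; the point is that programs become data.

# Pre-universality: one dedicated "machine" (function) per program.
def doubling_machine(x):
    return 2 * x

def squaring_machine(x):
    return x * x

# Post-universality: a single machine that takes the program as input.
def universal_machine(program, x):
    """Interpret a program given as a list of (opcode, operand) pairs."""
    for op, arg in program:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
    return x

double = [("mul", 2)]
triple_plus_one = [("mul", 3), ("add", 1)]  # a new program, no new hardware

print(universal_machine(double, 21))           # 42, same as doubling_machine(21)
print(universal_machine(triple_plus_one, 5))   # 16
```

Running a new program requires writing data, not building a new machine: the same shift the note attributes to explanatory knowledge over physical environments.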
The parallel that is starting to creep in is:

$\overbrace{\text{Specific Physical Environment}}^{\text{Specific Turing Machine}} \longrightarrow \overbrace{\text{Specific Physical Instantiation}}^{\text{Specific Program}}$

What changes when we move to explanatory knowledge? At first glance it may appear that explanatory knowledge is still tied to a specific physical environment—after all, an explanation (any element of $E$, for that matter) must be encoded in some physical medium in order to interact with an element of $\Pi$. Could it be that the range of physical environments that could give rise to a certain $\pi$ has gotten larger, but is not universal? Amazingly, that is not the case! There has indeed been a jump to universality; we just need to slow down and look for it a bit more carefully.

The jump actually occurred because we no longer need *specific physical environments at all*. Drawing on the deep concepts of [Virtual Reality](Virtual%20Reality.md) and [Self-Similarity](Self-Similarity.md), a specific $\pi$ can be [Simulated](Simulation.md) with arbitrarily good accuracy without needing to actually physically instantiate it. This means that instead of needing a specific physical environment to generate our butterfly, explanatory knowledge allows for an arbitrarily accurate simulation of it (in our minds, on a computer, and so on). This simulation can effectively occur anywhere that allows for computation. And while this may perhaps not be possible inside the stream of a quasar jet, most areas of the universe are most definitely compatible with this form of simulation.

We can extend this even further. Another element of $\Pi$ could be the star Proxima Centauri. A specific physical environment did indeed generate Proxima Centauri, but we cannot hope to recreate that environment near earth—it is not that the laws of physics prevent it, but it would likely conflict with other environments we wished to render.
However, explanations allow us to simulate the environment and Proxima Centauri. The principles of [Virtual Reality](Virtual%20Reality.md) and [Self-Similarity](Self-Similarity.md) ensure that we can do so with arbitrary accuracy. But it no longer requires the *specific environment* that created Proxima Centauri—one that is over 4 light years away. Any physical environment that can support computation and the storing of knowledge will do.

The same applies in the case of designing a plane. No longer do we need to try each and every variation of plane by flying it in the sky. We can simulate them at various levels. Some plane designs are ruled out simply based on marks of ink on paper—it is clear that they are poor designs based on known laws of physics. Some plane designs are ruled out during simulation in a wind tunnel. And of course some planes are ruled out when actually placed in their final environment, 40,000 feet above ground. Put another way, the self-similar nature of reality means that we can render any aspect of physical reality arbitrarily well. This removes the constraint of requiring a specific physical reality.

But wait, does this require explanatory knowledge? Or can non-explanatory knowledge, created by local trial and error, exploit this? Well, to be clear, evolution most certainly *does* make use of self-similarity and virtual reality rendering. However, it does so in a way that is entirely tied to the local physical environment. Evolution tries to capture elements of the local physical environment in genes. The self-similar nature of reality allows for that. But this is entirely tied to the local physical environment and cannot extend beyond it. Explanation allows for *arbitrary extension beyond that*. Sure, that will always depend on what portions of $E$ we have discovered and have available to us. But $E$ will always be expanding and evolving, meaning nothing is fundamentally out of reach—provided the laws of physics don't prevent it.
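The staged ruling-out of plane designs can be sketched as a pipeline of increasingly faithful (and expensive) checks, so that only survivors of the cheap ones earn a real flight. The design data and thresholds below are made up for illustration.

```python
# Sketch of simulation at increasing fidelity: cheap checks rule out designs
# before expensive ones run. Data and thresholds are illustrative.

designs = [
    {"name": "A", "wing_area_m2": 0.0, "drag_coeff": 0.02},  # no wings at all
    {"name": "B", "wing_area_m2": 120, "drag_coeff": 0.90},  # far too draggy
    {"name": "C", "wing_area_m2": 120, "drag_coeff": 0.03},
]

def paper_check(d):
    """Ink on paper: basic physics says no wings means no lift."""
    return d["wing_area_m2"] > 0

def wind_tunnel(d):
    """Simulated wind tunnel: excessive drag rules a design out."""
    return d["drag_coeff"] < 0.1

survivors = [d["name"] for d in designs if paper_check(d) and wind_tunnel(d)]
print(survivors)  # ['C']: only one design earns a real test flight
```

Each stage is a virtual-reality rendering of the final environment at some fidelity; the specific environment (40,000 feet of sky) is only needed at the very end, if at all.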
## Recap

A quick recap is in order. Both evolution and explanatory knowledge have:

- Concepts of error correction
- Context dependence (explanatory knowledge is always tied to solving a specific problem)

But the key differences are:

* Explanation provides an entirely new and rich structure and search process atop $\Pi$
* Explanations have reach
* Explanations provide a jump to universality
* Explanation does not require working with $\pi$'s directly. It can use self-similarity and simulation instead
* Explanations provide an entirely new class of problems
* Explanations can be created [Intentionally](Intentionality.md)

---
Date: 20250326
Links to:
Tags:
References:
* []()