# Moravec's Mistake: Possibility Pumping Masquerading as First Principles Thinking
%%TODO: Consider renaming this Essay "Logical Possibility Pumping Masquerading as First Principles Thinking. Then, in the intro you can call out that you were unsettled by Dust Theory, Boltzmann Brain, and Simulation Hypothesis, as well as HM's argument. You chose this once to argue against. But they all suffer from the same flaw%%
%%TODO: I hate this intro - it feels very click baity - rewrite it to reflect your own concern as you went through HMs argument. E.g. "One of the most unsettling things as a human is to be told that something you most deeply hold to be true—your belief about reality—is wrong...this is exactly what happened to me as I read Moravec's essay and was told that rocks were conscious. Consider also adding a brief dialogue at the start to help with engagement. You can have a character that is you and another that is moravec of mr witt"%%
%%TODO: Make point about not wanting to outsource our thinking. Bring in Hazlit Quote %%
%%TODO: provide some context—you came across this in Permutation City (even GE couldn't rule this out), Simulation Hypothesis, Boltzman Brain %%
Take a look around you and notice the first thing you see—a bookshelf, a dining table, a candle. Maybe you're on an airplane, staring at the tray table stowed away in front of you. Whatever the case, focus on that object. Now consider this: *That object is [Conscious](Consciousness.md)*.
At least, that is what Hans Moravec argued in his essay [*Simulation, Consciousness and Existence*](https://www.organism.earth/library/document/simulation-consciousness-existence). According to Moravec, your television remote, the jar of peanut butter in your pantry, and even a rock—they are all conscious.
How does he arrive at this unsettling conclusion? His reasoning follows a meandering path, but at its core, it rests on two fundamental steps:
1. He argues at length that *anything can be viewed as a simulation of any possible world*.
2. From there, he concludes that even the thermal jostling of rocks can be viewed as a complex, self-aware mind: Rocks are conscious.
This is not just an abstract philosophical musing—it is a radical claim about the nature of reality itself. If Moravec is right, the distinction between thinking beings and inanimate matter dissolves. A computer chip and a coffee mug might be just as “aware” as you or me.
But is this true? Can a rock _really_ be conscious? Or is this a failure of reasoning masquerading as deep insight?
I believe Moravec’s position is fundamentally flawed, and I intend to show why. First, I will [Steelman](Steelman%20Argument.md) his argument, presenting it in its strongest possible form. Then, I will demonstrate why it collapses under scrutiny. Moravec’s mistake is not just an intellectual curiosity—it is a profound failure of reasoning and bad philosophy.
# Moravec's Machine and the Chain of Consequences
%%TODO: need some sort of introduction here - help lay out the pieces we are going to need to touch on, specifically how we require simulation - what is the map of where we are going?%%
%%TODO: Use headings that act as a visual metaphor - e.g. Moravec's home base, Building Momentum, A Slippery Slope Towards Tautology, etc...%%
###### Gearing Up: A Prelude on Logical Deduction
%%TODO: bring in possibility pumping%%
%%TODO: bring in logical content%%
Throughout Moravec's entire argument, we will find logical deduction lurking in the background. Logical [Deduction](Deduction.md) is effectively a machine.
We start with a set of *axioms* or *assumptions*. We pass one of these axioms to the machine as an *input*; we can call this our *seed*. The machine then has a small screen upon which it displays a list of *rules of inference* that can be applied to the input. We select one of those rules to be applied, and the machine spits out a new statement. We then can take this statement and pass it back into the machine as a *new input*, at which point the machine will let us apply a new rule, spit out a newly updated statement, and so on.
%%TODO: Draw the machine and a single rule%%
We can think about this process as generating a *chain of inference*:
%%TODO: Draw a resulting chain of inference%%
What I just described was the machine being operated in *manual mode*. This is where the user is in the loop, selecting the rule of inference to be applied at each step. However, the machine also has a *recursive mode*. In this mode the machine recursively generates chains of inference. Say we have a chain $C$ composed of individual statements, with the final statement being $s_k$:
$C = \{ s_1, s_2, \dots , s_k\}$
This statement $s_k$ is passed to the machine as input, and the machine then applies each rule to $s_k$ independently. If the machine has a repertoire of five rules $\{r_a, r_b, r_c, r_d, r_e\}$ that can be applied to $s_k$, then it will have created five new chains. The chain $C_a$ (where rule $r_a$ was applied) is just one of those chains:
$C_a = \{ s_1, s_2, \dots , s_k, s_{k+1}\}$
$\text{where} \;\;\;\; s_{k+1} = r_a(s_k)$
%%TODO: Draw this%%
The key idea in all of this is that, given a set of initial assumptions and rules of inference, one can *mechanically* derive truth claims that were already embedded in those assumptions. This is not an act of deliberate construction, but a blind, recursive unfolding—a computational process that, once set in motion, is out of our control.
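To make the machine concrete, here is a minimal sketch in Python. The statements and rules are invented toys, not real inference rules; the point is only the shape of the process: every chain is blindly extended by every applicable rule, with no user in the loop.

```python
# A toy deduction machine. Statements are strings; rules are functions
# mapping a statement to a new statement. Both rules are invented toys.

def rule_double(s: str) -> str:
    return f"({s} AND {s})"

def rule_weaken(s: str) -> str:
    return f"({s} OR Q)"

RULES = [rule_double, rule_weaken]

def recursive_mode(chain: list[str], depth: int) -> list[list[str]]:
    """Extend a chain of inference by applying every rule to its final
    statement, producing one new chain per rule, then recurse."""
    if depth == 0:
        return [chain]
    chains = []
    for rule in RULES:
        extended = chain + [rule(chain[-1])]  # s_{k+1} = r(s_k)
        chains.extend(recursive_mode(extended, depth - 1))
    return chains

# Seed the machine with a single axiom and let it unfold.
for chain in recursive_mode(["P"], depth=2):
    print(" -> ".join(chain))
```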
This idea of a chain of inference can be extended outside the sterile environment of logic and into the world of reasoning more broadly. The result would be a mechanically constructed *chain of reasoning*. %%TODO: strengthen this - also, update the analogy / visual to combine the idea of mechanically moving forward with adding links to the chain%%
Moravec structures his argument in much the same way. He begins with a simple assumption and follows its consequences wherever they lead. His reasoning lurches forward mechanically, operating with only local concerns—fixated on its present location and the next legal rule, disregarding the broader landscape. Each step takes him deeper into unsettling territory, but he moves forward, always checking to be sure that there is no rule forbidding him from doing so.
The result is not an argument he stands atop triumphantly, but one he looks upon uneasily. It is a chain of reasoning that was unsettlingly unspooled rather than confidently constructed, leading him to conclusions he seems unwilling to embrace, yet is unable to reject.
We will now explore this chain of reasoning, starting with its initial assumption: its seed.
###### The First Link: The Seed of a Physical Fundamentalist
We must start with our *seed* - the initial assumption that sets our machine in motion. The seed is Moravec's philosophical foundation. He views himself as a "physical fundamentalist":
> [!quote]
> During the last few centuries, physical science has convincingly answered so many questions about the nature of things, and so hugely increased our abilities, that many see it as the only legitimate claimant to the title of true knowledge....I myself am partial to such “physical fundamentalism.”
Although he only describes this position briefly, we can infer that it is a [Superset](Superset.md) of [Physicalism](Physicalism.md) and [Reductionism](Reductionism.md). Physicalism is the philosophical position that everything that exists is either physical in nature or depends on physical processes. It asserts that all phenomena, including mental states, consciousness, and abstract concepts, can ultimately be explained in terms of physical entities, properties, and laws. Reductionism is the view that all scientific explanations are *reductive*. A reductive explanation is one that works by analyzing things into lower-level components.
###### The Second Link: Moravec's Machine Spits Out Solipsism and Simulations
Given the initial seed—reality is nothing more than physical processes governed by strict laws—one might assume our perceptions reflect the physical world. However, Moravec concedes that we can never be certain of this. He acknowledges that we cannot *logically* rule out solipsistic scenarios—such as [Descartes Evil Demon](Descartes%20Evil%20Demon.md) or a [Brain in a Vat](Brain%20in%20a%20Vat.md)—where everything we experience might be an elaborate [Simulation](Simulation.md):
> [!quote]
> Physical fundamentalists, however, must agree with René Descartes that the world we perceive through our senses could be an elaborate hoax. In the seventeenth century Descartes considered the possibility of an evil demon who created the illusion of an external reality by controlling all that we see and hear (and feel and smell and taste).
At this point we can take stock of Moravec's argument:
1. He is a physical fundamentalist.
2. Given that, he is not able to logically refute Solipsism. Everything we experience could be part of some fabricated simulation.
###### Interlude Part 1: A Simulation is Defined By Internal Relationships
As this chain of reasoning begins to take us into unfamiliar territory, I must take a brief aside to address a critical question—fundamentally, what is a [Simulation](Simulation.md)? How does Moravec define it, and what are its essential properties? The rest of his argument will hinge on this concept of simulation, so we need to get our arms around just what it is.
Moravec starts by referencing a benign, familiar class of simulations, such as those of weather or aircraft flight. This class is generally run to provide some output for an external observer, such as answers and images. For example, a weather simulation may produce pictures of evolving cloud cover. But he quickly makes it clear that while most simulations we think of today are designed to provide some output for an external observer, that is not what defines a simulation at its core:
> [!quote]
> Inside the simulation events unfold according to the strict logic of the *program*, which defines the “*laws of physics*” of the simulation...The simulation’s *internal relationships* would be the same if the program were running correctly on any of an endless variety of possible computers, slowly, quickly, intermittently, or even backwards and forwards in time, with the data stored as charges on chips, marks on a tape, or pulses in a delay line, with the simulation’s numbers represented in binary, decimal, or Roman numerals, compactly or spread widely across the machine. There is no limit, in principle, on how indirect the relationship between simulation and simulated can be.
Thus, in Moravec's view, a simulation is defined by its [Intrinsic](Intrinsic.md) [Rules](Rules.md), the internal relationships that result from following those rules, and the entities that follow them. These intrinsic rules *define* the simulation. Based on the principle of [Computational Universality](Universal%20Computer.md), the substrate upon which the computation is run is irrelevant. After all, [Computation is Just Following Rules](Computation%20is%20Following%20Rules.md). This makes simulation a unique phenomenon. Like [Knowledge](Knowledge.md) or [Information](Information.md), it must be instantiated via some substrate, but it is substrate independent: it is [Abstract](Abstractions.md).
It is worth stating this again for emphasis: *all that a simulation consists of is its intrinsic rules and subsequent internal relationships*. But this then raises a question: if a simulation is entirely defined by a set of intrinsic rules, what does it mean to be *outside* or *external to* those rules?
Consider the game of chess. It is defined by a set of clear, finite rules that determine how the state of the board can evolve at any point in time. If we recall our digression into logical deduction, there was a "recursive mode" we could run that allowed the system to mechanically lurch forward without any external user in the loop. The same scenario applies here. Given the initial state of the board, the game of chess could be run in recursive mode—game state trajectories could be generated by simply taking the state of the board, applying a valid rule (moving your queen to a7), and repeating. No external user required.
Let us now consider two types of external observers. The first is a *player* of the game, Ortho. Ortho is a principled player who abides by the intrinsic rules of the game. It may be surprising that a player counts as an external observer, but that is a simple consequence of how a simulation was defined: it consists only of its intrinsic rules. A player of the game is not part of those rules. However, a player can interact with the game by knowing the rules (effectively instantiating them in their mind) and selecting a move that takes an initial board state and evolves it in a valid way according to the intrinsic rules. This player is external to the simulation, but can interact with it so long as they do so in ways that respect the rules.
The second external observer is Scorch. Scorch sees that the player he is rooting for is on the brink of losing the match. He decides to take a blowtorch and set the opposing player's king ablaze, thinking that he has just helped his preferred player win. But is that accurate? Did Scorch actually help his preferred player win? Clearly he did not! He took an action that was fully outside of the simulation's intrinsic rules. Torching the king was only possible because the abstract rules of chess had to be encoded in some physical medium—in this case the wooden chess board and pieces. If the game had been played virtually on a classical computer, Scorch could have "torched" the opposing king by flipping the bit of memory representing its state from a 1 to a 0. Again, that would be entirely outside of the intrinsic rules of the game and entirely invalid. The only way to interact with a simulation is by ensuring that you conform to its abstract rules (which, again, are instantiated in some physical medium).
This does not merely apply to games—we can apply it to *any* rule-based system. Consider the system of Roman numerals. In this system there is no symbol for $0$. Thus, if we were to try to perform the operation $\text{XXI} - \text{XXI}$, we would be firmly rejected and run into a dead end: inside this system that particular operation is undefined. From the vantage point of our modern brains, laden with knowledge of the Hindu-Arabic numeral system, we can *see* that this operation just resolves to $0$. However, seeing that required us to *jump outside* the system of Roman numerals, for inside the system the operation is undefined[^5]. To us it may seem that this restriction is entirely unnecessary. However, that is not what is important here. What is important is that the system of Roman numerals does not include the concept of $0$—it is *external* to it.
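A small sketch makes the dead end tangible. The subtraction itself happens in modern integers; the point is only that the final step, mapping the result back into Roman numerals, has nowhere to land when the answer is zero:

```python
# Standard value-to-numeral mapping (truncated to what we need for XXI).
ROMAN = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int) -> str:
    if n <= 0:
        # Inside the system this is a dead end: there is no symbol here.
        raise ValueError("undefined inside the Roman numeral system")
    out = ""
    for value, symbol in ROMAN:
        while n >= value:
            out += symbol
            n -= value
    return out

print(to_roman(21))       # XXI
print(to_roman(21 - 21))  # raises ValueError: XXI - XXI has no representation
```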
%%TODO: Create a visual of this - it can actually be visualized. Think about the "space" of valid operations in a formal system - this likely has a tree like structure%%
%%TODO: tie this in with the above paragraph more effectively%%
This raises a bit of a sticky question: if a simulation is entirely *defined* with respect to its intrinsic rules—that's what it *is*—then how does an external observer who is *not a part of the system* interact with it? In the case of the chess player, observing is easy. The representation of the game just needs to be mapped to a form the chess player understands, usually some two-dimensional grid. Of course, an equivalent representation would be a list of all grid positions and the occupying pieces. Additionally, a chess simulation is designed specifically for *interaction*. While it can certainly be run in a "recursive mode", valid trajectories can easily be generated with external users in the loop, so long as they follow the rules.
However, some simulations are not nearly as conducive to external observers or interaction. A delicious example of this is given in the book [Permutation City](Permutation%20City.md), where one of the protagonists, Maria, working with a self-contained simulated universe (run via a [Cellular Automaton](Cellular%20Automaton.md)[^6]), finds herself frustrated by the very nature of its intrinsic [laws of physics](laws%20of%20physics.md). These intrinsic rules step forward solely in recursive mode; that is how the cellular automaton is constructed. Given the current state of the system, a single rule is repeatedly applied. There is no room for an external observer to select the next rule as there was in chess.
Maria examines petri dishes filled with simulated bacteria, but their appearance is entirely determined by the viewing software—which assigns false colors to represent bacterial health. But, as she realizes, all views of a simulation are inherently artificial, just like any map that color-codes data to highlight specific attributes. There is no such thing as a “raw” view of a simulated world—only different levels of abstraction and stylization, dictated by the interface translating the simulation’s internal state into something intelligible to an outside observer.
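The Autoverse is vastly richer than anything we can sketch here, but even a toy cellular automaton shows both halves of the point: the rules lurch forward in pure recursive mode, and any view of the resulting states is supplied by a separate, arbitrary rendering choice. A minimal sketch (Rule 110 is a standard one-dimensional automaton; the rendering characters are my own arbitrary "false color"):

```python
# Rule 110: each cell's next value is determined by (left, center, right).
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells: list[int]) -> list[int]:
    """Recursive mode: one fixed rule applied everywhere, no user input."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

def render(cells: list[int]) -> str:
    """A 'viewing program'. The character choice is pure false color;
    a different render() would be an equally artificial view."""
    return "".join("#" if c else "." for c in cells)

state = [0] * 30 + [1]
for _ in range(10):
    print(render(state))
    state = step(state)
```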
%%TODO: Explain that the reason we can never see a raw simulation is because it is actually governed by abstract rules - see nblm%%
%%TODO: explain that in order to "manipulate" the system the external viewing program has to pause the system, take the systems state, convert it into some external form that maria can manipulate, then she can manipulate it, and then the viewing software has to convert it back somehow to a new game state? This may be incredibly challenging because it requires skirting the rules%%
With regard to interaction, the simulation allows Maria to manipulate molecular structures freely, but only by suspending its actual intrinsic rules. She considers designing a more "authentic" interaction method but realizes the paradox: so long as she remains external to the simulation, she can only manipulate its world by breaking its intrinsic laws at some level. She can *shift* where the rules are violated, but she must always violate them. In short, true interaction from the outside is impossible without disrupting the system's self-contained logic.
We've seen that an external observer cannot *change* the system without inherently becoming part of it. For it to remain a self-contained simulation, they may only *observe* it.
%%TODO: Another visual is helpful here - making it clear being inside or outside the system%%
Keeping in mind this notion of being inside or outside the system, Moravec goes on to describe how, if any external observer wants to view the simulation, they will need a program to translate its internal representations into ones convenient for them. Currently, most simulations are designed with the goal of providing outputs that can be interpreted by special external viewing programs—this was the case with the examples of weather and flight from earlier:
> [!quote]
> Today’s simulations, say of aircraft flight or the weather, are run to provide answers and images. They do so through additional programs that translate the simulation’s internal representations into forms convenient for external human observers...A simulation, say of the weather, can be viewed as a set of numbers being transformed incrementally into other numbers. Most computer simulations have separate viewing programs that interpret the internal numbers into externally meaningful form, say pictures of evolving cloud patterns.
However, Moravec continually states that a simulation does not *depend* on any external interpretation - what matters are the internal relationships %%TODO: should this quote be moved above to the original definition of simulation? Does it help with flow?%%:
> [!quote]
> The simulation, however, proceeds with or without such external interpretation.
%%TODO: Create a visual here of internal relationships and how they define a simulation, and how an external observer needs to translate via some viewing program%%
%%TODO: Think about Pinker's argument about helping a reader see. Should I include a visual argument about 2d vs 3d structures? Does that help explain my main argument and does it help highlight the flaws of Moravecs? Being inside or outside the system. Does that somehow relate to the encoding vs decoding argument? I feel like HM is constantly mixing up the system he is talking about—sometimes the simulation is self contained, other times the interpretation is part of the system. %%
###### The Third Link: Consciousness Can Be Simulated
Armed with a deeper understanding of Moravec's views on simulation, we can march forward. Moravec now turns his attention to consciousness. From his initial entry point of solipsism, he then argues that even consciousness can be simulated:
> [!quote]
> Today’s virtual adventurers do not fully escape the physical world: if they bump into real objects, they feel real pain. That link may weaken when direct connections to the nervous system become possible, leading perhaps to the old science-fiction idea of a living brain in a vat. The brain would be physically sustained by life-support machinery, and mentally by connections of all the peripheral nerves to an elaborate simulation of not only a surrounding world but also a body for the brain to inhabit.
>
> The virtual life of a brain in a vat can still be subtly perturbed by external physical, chemical, or electrical effects impinging on the vat. Even these weak ties to the physical world would fade if the brain, as well as the body, was absorbed into the simulation. If damaged or endangered parts of the brain, like the body, could be replaced with functionally equivalent simulations, some individuals could survive total physical destruction to *find themselves alive as pure computer simulations in virtual worlds*.
And just as simulation in general did not depend on external observers, the same goes for any conscious inhabitants that reside inside the simulation: they will exist regardless of whether they are ever externally observed:
> [!quote]
> A simulated world hosting a simulated person can be a closed self-contained entity. It might exist as a program on a computer processing data quietly in some dark corner, giving no external hint of the joys and pains, successes and frustrations of the person inside...Conscious inhabitants of simulations experience their virtual lives whether or not outsiders manage to view them.
Thus simulations will proceed and any [Conscious](Consciousness.md) inhabitants will experience their virtual lives whether or not they are externally interpreted. What matters is simply the internal relationships.
###### Interlude Part 2: Simulations Must be Encoded Via Physical Processes
So far, when discussing traditional simulations—such as weather or aircraft cockpits—we have implicitly assumed they are [Programs](Program.md) running on classical computers. After all, programs are [Abstractions](Abstractions.md), and since [Computation is a Physical Process](Computation%20is%20a%20Physical%20Process.md), a program must be encoded in a physical substrate and then instantiated in order to perform any computation.
However, due to the principle of [Computational Universality](Computational%20Universality.md), Moravec recognizes that *any* physical substrate can, in theory, support computation. In the case of a classical computer, the substrate is a silicon microprocessor containing transistors that act as electronic switches that can store and manipulate information. But computation is not tied to silicon—it can be implemented in any physical medium that supports the ability to reliably [follow and execute rules](Computation%20is%20Following%20Rules.md). This can be accomplished by mediums as diverse as [Babbage's Analytical Engine](https://en.wikipedia.org/wiki/Analytical_engine), a [complex arrangement of spring loaded dominos](https://en.wikipedia.org/wiki/Domino_computer), or a [Human Computer](Human%20Computer.md) consisting of a vast formation of humans placed into groups, each acting as a logic gate to collectively perform binary calculations.
Based on this he writes:
> [!quote]
> Just as a literary description of a place can exist in different languages, phrasings, printing styles, and physical media, a simulation of a world can be implemented in radically different data structures, processing steps, and hardware. If one interrupts a simulation running on one machine and translates its data and program to carry on in a totally dissimilar computer, the simulation’s intrinsics, including the mental activity of any inhabitants, continue blithely to follow the simulated physical laws. Only observers outside the simulation notice if the new machine runs at a different speed, does its steps in a scrambled order, or requires elaborate translation to make sense of its action.
Since we have established that computation is not tied to any specific medium, it follows that simulations can, in principle, be run on any physical substrate. Moravec thus defines exactly what it means for some [Physical Process](Physical%20Process.md) to [Encode](Encoding.md) a simulation:
> [!quote]
> What does it mean for a process to implement, or *encode*, a simulation? Something is palpably an encoding if there is a way of *decoding* or *translating* it into a recognizable form. Programs that produce pictures of evolving cloud cover from weather simulations, or cockpit views from flight simulations, are examples of such decodings.
Moravec is arguing that a process implements or encodes a simulation if there is a way to mechanically [Decode](Decoding.md) or [Translate](Translation.md) it into a recognizable form. In the case of our classical computer, the physical process of charges being moved around on a silicon chip is *encoding* a simulation of the weather. It is an encoding because there exists an external program that can translate it into a form we recognize, namely images and interpretable numbers. Note that Moravec's definition of an encoding does *not* depend on the physical substrate instantiating the intrinsic rules of the simulation—it simply depends on being able to decode the states of the substrate into some recognizable form.
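A minimal sketch of that substrate indifference, with an invented quantity (one grid cell's cloud density) held in two invented substrates: each counts as an encoding precisely because a decoder can translate it into the same recognizable form.

```python
# The same simulated quantity stored two radically different ways.

def decode_binary(substrate: str) -> int:
    """Decode a 'charges on chips' style encoding."""
    return int(substrate, 2)

def decode_tally(substrate: str) -> int:
    """Decode a 'marks on a tape' style encoding."""
    return substrate.count("|")

encodings = [("101101", decode_binary),   # binary digits
             ("|" * 45, decode_tally)]    # tally marks

for substrate, decode in encodings:
    print(f"cloud density: {decode(substrate)}%")  # both print 45
```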
Having defined what it means to encode a simulation, Moravec now defines what it means to [Interpret](Interpretation.md) one:
> [!quote]
> An *interpretation* of a simulation is just a mathematical mapping between states of the simulation process and views of the simulation meaningful to a particular observer.
A simulation can only be [Interpreted](Interpretation.md) via an additional program that [Translates](Translation.md) the simulation's internal representations into external representations convenient for external observers. Translation is a purely mechanical process: on its own it provides no meaning, and it is only concerned with converting one representation to another. Interpretation is a specific mapping that takes states of the simulation and maps them to views meaningful to some observer. Thus, interpretation is just a specific type of translation—one that yields a meaningful output with respect to some observer. So, we could say that for a process to encode a simulation there must be a means of interpreting it in a meaningful way.
%%TODO: I think this paragraph is confusing as it currently stands. Do we need to detail the difference between interpretation and translation%%
%%TODO: Create a visual here based on the above where you show how a set of internal relationships can be encoded in many different physical substrates. This visual should have a label for encoding, decoding, interpretation - see green notebook drawings %%
%%TODO: Create an image that shows interpretation is a subset of translation.%%
%%TODO: include all observations are theory laden - and thus require interpretation%%
###### The Fourth Link: Accept All Mathematically Possible Decodings
Again, let us take stock of Moravec's argument:
1. He is a physical fundamentalist.
2. Given that, he is not able to refute Solipsism. Everything we experience could be part of some fabricated simulation.
3. Consciousness can be simulated
While today's programs are specifically *designed* to follow a simple decoding process, Moravec realizes that need not be the case. He takes this idea to its natural limit:
> [!quote]
> As the relationship between the elements inside the simulator and the external representation becomes more complicated, the decoding process may become impractically expensive. Yet there is no obvious cutoff point. A translation that is impractical today may be possible tomorrow given more powerful computers, some yet undiscovered mathematical approach, or perhaps an alien translator...Why not accept all mathematically possible decodings, regardless of present or future practicality? This seems a safe, open-minded approach, but it leads into strange territory.
Moravec looked around, saw there was no signage forbidding him from accepting all mathematically possible decodings, and mechanically stepped forward.
###### The Fifth Link: Anything Can Be Viewed As a Simulation of Any Possible World
Given that we will accept any mathematically possible decodings, what are the end points of this mathematical possibility? What is the ceiling on mathematically possible decodings?
> [!quote]
> A small, fast program to do this makes the interpretation practical. Mathematically, however, the job can also be done by a huge theoretical lookup table that contains an observer’s view for every possible state of the simulation...
>
> The observation is disturbing because there is always a table that takes any particular situation—for instance, the idle passage of time—into any sequence of views. Not just hard-working computers, but *anything at all can theoretically be viewed as a simulation of any possible world*!
>
> A simulation, say of the weather, can be viewed as a set of numbers being transformed incrementally into other numbers. Most computer simulations have separate viewing programs that interpret the internal numbers into externally meaningful form, say pictures of evolving cloud patterns. The simulation, however, proceeds with or without such external interpretation. If a simulation’s data representation is transformed, the computer running it steps through an entirely different number sequence, although a correspondingly modified viewing program will produce the same pictures. There is no objective limit to how radical the representation can be, and *any simulation can be found in any sequence*, given the right interpretation.
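To feel how little work the underlying process is doing here, consider a toy version of Moravec's lookup table. The process below is as inert as possible, a counter ticking upward, yet the table turns it into a "simulation" of a sunrise; a different table would turn the same ticking into anything else (the views are, of course, invented):

```python
def idle_process(t: int) -> int:
    """The 'simulator': nothing but the idle passage of time."""
    return t

# A lookup table containing an observer's view for every state.
TABLE = {0: "night", 1: "first light", 2: "sunrise", 3: "morning"}

for t in range(4):
    print(TABLE[idle_process(t)])  # the table is doing all the work
```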
And just like that, Moravec's Machine has produced a very unsettling chain of reasoning. He has just set Pandora's Box in front of us, and now he's going to open it.
###### The Final Link: Pandora's Box is Full of Conscious Rocks
%%TODO: Steelman this "entry point" into the idea that a physical rock could simulate something. He is trying to show that a simulation and the physical substrate that it is being simulated on can be quite disconnected. Try and bolster this argument to start creating tension for the reader%%
%%TODO: Add a paragraph where we "take stock" of where we are%%
We are about to be rewarded for our efforts in following this argument. To recap, here is the general structure of Moravec's chain of reasoning:
1. He is a physical fundamentalist.
2. Given that, he is not able to refute Solipsism. Everything we experience could be part of some fabricated simulation. Simulations are defined by their internal relationships.
3. Consciousness can be simulated.
4. Simulations need to be encoded in physical processes. Something is an encoding if it can be decoded.
5. We should accept all mathematically possible decodings.
6. Anything can be viewed as a simulation of any possible world.
And we now arrive at the final link in the chain. Moravec is about to claim that, because a simulation must be encoded via a physical process, and we should accept all mathematically possible decodings, a *rock* can be interpreted as a simulation of a *conscious mind*:
> [!quote]
> Perhaps the most unsettling implication of this train of thought is that anything can be interpreted as possessing any abstract property, including consciousness and intelligence. Given the right playbook, the thermal jostling of the atoms in a rock can be seen as the operation of a complex, self-aware mind. How strange. Common sense screams that people have minds and rocks don’t. But interpretations are often ambiguous.
>
> No particular interpretation is ruled out, but the space of all of them is exponentially larger than the size of individual ones, and we may never encounter more than an infinitesimal fraction. The rock-minds may be forever lost to us in the bogglingly vast sea of mindlessly chaotic rock-interpretations. Yet those rock-minds make complete sense to themselves, and to them it is we who are lost in meaningless chaos. Our own nature, in fact, is defined by the tiny fraction of possible interpretations we can make, and the astronomical number we can’t.
%%TODO: Add a nice closing paragraph summarizing this steel man. Also, tie it in to your intro (try and stir up the readers emotions)%%
And just like that, Moravec has argued for a complete overhaul of our world view. Not only are rocks and pinecones and coffee mugs conscious, but any simulated world can be found in any physical object, and they are all equally real. This is a deeply unsettling proposition.
Now you may be thinking "I am unfazed, this argument is clearly nonsense. Rocks don't have minds and simulations don't exist everywhere we look. That doesn't make sense." But *that* is an exceedingly poor counter argument. This was the exact structure of the argument that [The Inquisition made in response to Galileo's Heliocentric Theory](Galileo%20vs%20the%20Inquisition.md). The earth does not *feel* like it is hurtling through space at 66,000 miles per hour, while moving at a speed of 1040 miles per hour around its rotational axis. To say that it does is to make our theory more complex and contradict clear intuition. What grounds could we possibly have for doing that?
And yet today we know that Galileo was correct. So clearly arguing against an idea on the grounds that it is not intuitive and contradicts common sense is not a strong approach.
But as you read Moravec's argument, you may have found yourself growing increasingly irritated at just how slippery it was to pin down. The machine appeared to churn out locally reasonable and consistent bits of reasoning. Moravec seemed to just be putting on a masterclass in [First Principles Thinking](First%20Principles%20Thinking.md)—starting from the fundamentals, he built up his argument piece by piece for all to see.
Well, let me put your mind at ease: Moravec's argument is rotten to the core. Let me show you why.
# Interlude: My Principles of Reason
We have reached the end of my steel manning of Moravec's argument. I have already laid my cards fully on the table: I will be decimating his argument so thoroughly that the original dread I felt upon first hearing it will be long forgotten. But in order to do so, we must take yet another brief detour. This time into my own personal philosophy: the [Principles of Reason](Principles%20of%20Reason.md)[^7].
The [Principles of Reason](Principles%20of%20Reason.md) form the bedrock from which I conduct all critical thought. They are meant to be self evident and uncontroversial. But by turning the attention of these principles towards Moravec's Machine, we will see that it is not producing a steel chain of consistent, logical reasoning. Rather, it is producing a tangled knot of explanationless contradictions.
###### Principle 1: We Should Strive For Progress
The first Principle of Reason simply states that we should strive for [Progress](Progress.md). I will define progress as moving from [Problems](Problem.md) to better problems. According to the [Principle of Optimism](inbox/Principle%20of%20Optimism.md), all problems are due to insufficient [Knowledge](Knowledge.md). Thus, in order to achieve progress, we must build knowledge. The natural question to then ask is how exactly do we do that?
###### Principle 2: To Achieve Progress, We Must Seek Good Explanations
We achieve progress via [seeking good explanations](We%20Must%20Seek%20Good%20Explanations.md). Seeking good [Explanations](Explanations.md) leads to progress because explanations are the best way to [solve our problems](Problem%20Solving%20Process.md). What exactly is a "good" explanation? [An explanation is good if it has a superior ability to solve problems compared to its rivals](Explanations%20Are%20Justified%20By%20Their%20Superior%20Ability%20to%20Solve%20Problems%20They%20Address.md).
Put broadly, [Rational Inquiry Requires Pursuing Good Explanations](Rational%20Inquiry%20Requires%20Pursuing%20Good%20Explanations.md).
* [Explanationless Progress is Impossible](Explanationless%20Progress%20is%20Impossible.md).
* [Explanations Provide a Fundamentally Different Structure and Search Process](Explanations%20Provide%20a%20Fundamentally%20Different%20Structure%20and%20Search%20Process.md)
* [Conjecture and Criticism is Preferable to Evolution for Knowledge Creation](Conjecture%20and%20Criticism%20is%20Preferable%20to%20Evolution%20for%20Knowledge%20Creation.md)
* You can try to contort yourself and find a way to make progress without explanation, but it will be very hard
* [Problem Driven Epistemology](Problem%20Driven%20Epistemology.md)
* [Reductionism](Reductionism.md)
* [Hierarchy of Theories](Hierarchy%20of%20Theories.md)
1. Progress (move from problem to better problem). We do so via building knowledge.
2. We build knowledge via seeking good explanations.
---
At the core I am really trying to argue for two things:
1. [We Must Seek Good Explanations](We%20Must%20Seek%20Good%20Explanations.md).
2. We must *always do this*—[There Are No Exceptions](We%20Must%20Seek%20Good%20Explanations.md#There%20Are%20No%20Exceptions)
I believe that (1) is easier to argue for than (2). So, in three paragraphs or less, argue for (1). Address the actual counter arguments you expect to hear, not just hypothetical ones. You want a cohesive narrative, not a meandering one that reads like a legal document.
I expect the counter argument to go something like: "Yes, I completely agree that we should seek good explanations. And in most areas of the universe that is the correct thing to do. But there are small regions and situations where seeking good explanations does not apply—we know so little about them that the *best* explanation is simply to appeal to what is logically possible. That is the safest thing to do."
We can interpret his counter argument in two ways:
1. There really is some boundary within which seeking good explanations does not apply
2. There is no boundary—in fact, he may argue that he *is* seeking the best explanation. But based on Occam's Razor the simplest explanation is best
**Counter Argument Against (1)**
We have a theory/explanation (based on a deep, solid argument) that seeking good explanations should hold everywhere. Our argument: arbitrary boundaries lead to bad philosophy and the entrenchment of error; such a boundary is not only false but will also prevent the growth of knowledge. With that said, it isn't really an aspect of the argument that states *it must apply everywhere*. Rather, our explanation is best *if it applies everywhere*, for saying it only applies sometimes, in certain places, is just an arbitrary exception that worsens the theory. This is similar to [General Relativity](General%20Relativity.md). The theory is argued to hold everywhere. Why? Couldn't it be that some tiny area of some galaxy that we have yet to observe is outside the laws of GR? Certainly that is logically possible! But it would *ruin* the entire explanation of gravity. It would solve no problems and create *many* new problems. If we observe this to be the case then we will figure it out. But postulating it without a good argument is actively bad.
Thus, our explanation of why we should *always* seek good explanations has no exceptions. We use it everywhere.
[Explanations Imply Consequences](Explanations%20Imply%20Consequences.md). They have a structure. As shown in [Argument Is How We Achieve Justification](Argument%20Is%20How%20We%20Achieve%20Justification.md), if you try and sever parts of an explanation you almost always create a *worse* explanation—and you may potentially even ruin it.
In this case, postulating an arbitrary exception fully *ruins* the explanation that "we must seek good explanations and explanation holds everywhere". It creates many new problems and solves none.
**Counter Argument Against (2)**
Notice that argument (2) is similar to the argument for God. We could equally say: "We know so little about what came before the big bang. Isn't the simplest explanation that 'God created the big bang'?" And of course this is a terrible explanation—it is easy to vary. We could just as easily say the tooth fairy did it. HM's approach could be used to justify solipsism as well, or the simulation hypothesis! Again, it is really saying: since something *could* be true, can't I therefore argue that it is true? Isn't the simplest explanation that it is true?
Well, that move is doing a ton of work. Moving from something being logically possible to being actually true is a massive leap. We get there via good arguments and explanations. Has HM provided any argument or explanation for *why* this should be the case? No!
His approach could be used to argue for anything, making it, once again, a bad explanation.
This is a classic example of [Logical Possibility Pumping](Logical%20Possibility%20Pumping.md). There is a difference between first principles thinking and dancing around with logical possibility. First principles thinking is based on *argument*. Logical possibility is based on, well, logical possibility. One is following the [Principles of Rationality](Principles%20of%20Rationality.md), the other is not. One seeks good explanations, the other does not.
Moravec's logic machine is classic possibility pumping. And we arrive at this bad outcome because he abandons the search for good explanations, settling instead for whatever is merely logically possible.
HM, no matter how he tries to squirm out of this, is breaking this rule with an exception.
# The Prevailing Argument
%%TODO: I think it would be a good idea to add the prevailing argument here. Basically, what are good definitions an explanations of simulation, encoding, decoding, interpretation, etc. This will be the pillar we can count on as we show all of the flaws of his argument. In terms of a window / cohesive narrative, would this be even better suited for before my steelman of moravec? Note: I need this prevailing argument in order to ensure that I provide the proper window the world%%
Up until now I have been solely trying to [Steelman](Steelman%20Argument.md) Moravec's argument, [taking it seriously on its own terms](Take%20Theories%20Seriously%20on%20Their%20Own%20Terms.md). In a moment we will see that it is truly unsalvageable. But my [Explanation](Explanations.md) for why will depend on showing that there is a *better* explanation available to us. Explanations are meant to solve [Problems](Problem.md)—they [Are Justified By Their Superior Ability to Solve Problems They Address](Explanations%20Are%20Justified%20By%20Their%20Superior%20Ability%20to%20Solve%20Problems%20They%20Address.md). By being constructive, it will be easier for us to criticize the flaws of Moravec's argument in a concrete way. So let me sketch out the prevailing, standard explanation at play here.
###### Systems
I will define a [System](System.md) as some set of components following some set of [Rules](Rules.md). Systems can be *abstract* or *physical*. By abstract I simply mean non-physical but real according to our best explanations. An abstract system could be the mathematical system of prime numbers, the abstract [Program](Program.md) governing a flight simulator, or the abstract laws of physics governing Venus. You can think of it as a blueprint of the entities of the system and the rules that they follow. A physical system could be a [domino computer](The%20Domino%20That%20Didn't%20Fall.md), a physical [Flight Simulator](Flight%20Simulator.md), or the planet Venus. Note we can always view physical systems as performing a [Computation](Computation.md), for computation is a [physical process](Computation%20is%20a%20Physical%20Process.md) [following rules](Computation%20is%20Following%20Rules.md).
All physical systems have an abstract counterpart. In the case of the Domino Computer, a physical system of spring-loaded dominos corresponds to the abstract set of rules governing the prime numbers. This system of abstract rules does not exist *physically*—you will never be out for a walk and trip over it. However, abstract rules can be instantiated in physical substrates—and they must be in order to become operational. For [Computation is a Physical Process](Computation%20is%20a%20Physical%20Process.md) and [Computation is also our Window to the Abstract](Computation%20is%20the%20Window%20to%20the%20Abstract.md).
All abstract systems have an infinite number of physical counterparts. Consider the abstract system of the game of chess. This abstract set of rules can be instantiated on a wooden chess board with wooden pieces. But the pieces could equally well be made of ceramic, marble, or stone. Or the game could be instantiated on your laptop, with charges on chips representing the abstract rules and state. Due to [Computational Universality](Computational%20Universality.md), whether we instantiate the abstract system on a desktop computer, a [Human Computer](Human%20Computer.md), or [a set of spring loaded dominos](The%20Domino%20That%20Didn't%20Fall.md) will not matter. They all correspond equally well to the abstract system, assuming the physical system can act as a universal computer.
###### Simulation
Given that we have defined what a system is, let us now move to simulation. A [Simulation](Simulation.md) is a process in which one system $A$ is trying to accurately render or represent another system $B$. Simulation utilizes computation, and computation is a physical process; thus simulation is physical. Because simulation inherently involves a relationship between an abstract and a physical system, we can think of a simulation as being composed of two layers.
There are two main types of simulation we must be aware of. The first is when we have an abstract system $A_1$ being simulated by a physical system $P_1$. Here $P_1$ is trying to render the rules of $A_1$ as accurately as possible.
The second is when we have a physical system $P_1$ attempting to simulate a physical system $P_2$. This works as follows. All we have access to is $P_2$. It corresponds to an abstract system $A_2$. We conjecture a system $A_1$ that ideally emulates $A_2$. We then instantiate $A_1$ in $P_1$, thus simulating $P_2$. To properly simulate $P_2$, it must be the case that $A_1$ accurately emulates $A_2$, and that $P_1$ accurately instantiates $A_1$. This means there are two sources of possible error, $e(A_1, A_2)$ and $e(P_1, A_1)$.
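One way to make this precise (my own gloss, assuming some distance measure $d$ between the behaviors of two systems) is to bound the total error of the simulation by its two components:

$$d(P_1, P_2) \;\lesssim\; \underbrace{e(A_1, A_2)}_{\text{wrong or approximate laws}} \;+\; \underbrace{e(P_1, A_1)}_{\text{imperfect instantiation}}$$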
%%TODO: generate an image of this%%
The error $e(A_1, A_2)$ is most prevalent when we either don't have a good conjecture for what $A_2$ actually is, or it is so complex that we must make $A_1$ only approximate it. Examples of the former may be simulating gravitational effects at quantum scales—we simply do not have a good conjecture for what occurs at that level in terms of gravity. Examples of the latter may be simulating incredibly complex quantum dynamical systems.
The error $e(P_1, A_1)$ is most prevalent when the physical system $P_1$ struggles to capture $A_1$. An example of this would be if we were trying to simulate Niagara Falls, but instead of using a classical computer we chose to engineer a near replica. However, we chose to do so in the Yukon Territory of Canada, where the riverbed consistency is quite different from that of Western New York. This may lead to a poor representation of $A_1$.
Note that this is similar to the [Center Court at Wimbledon](Center%20Court%20at%20Wimbledon.md).
Notice that the simulation is entirely indifferent to the substrate we have chosen to run it on. For instance, imagine we are creating a weather simulation on a laptop. We have access to the physical weather observed on Earth. This is governed by [The Laws of Physics](The%20Laws%20of%20Physics.md). However, we don't have access to the true laws, so we approximate them via our best conjecture. We then instantiate these laws physically on the computer in order to run the simulation. Notice that inside of our computer there is physically no rain, lightning, thunder, hurricanes or tornadoes. Physically, these systems could not be more different. However, what we have is a correspondence between the *abstract* systems.
###### Encoding
I have yet to answer how the abstract rules of $A_1$ actually get instantiated in $P_1$. This happens via the process of [Encoding](Encoding.md). Encoding is just the process of taking some abstract system and representing it in some physical medium. In other words, it is about mapping one system onto another. When we then instantiate and execute this, we have our simulation. Thus we can update our definition of simulation to be the physical *running of rules* that correspond (approximately) to a system of interest. Encoding is static, while simulation is dynamic.
As mentioned earlier, a [Program](Program.md) is a *specific type* of abstract [System](System.md) that is designed to be instantiated and executed by a physical system (e.g. a computer). Encoding is best understood via an example. Imagine we have an abstract weather simulation program. We have encoded it to run on MacOS. However, we could also encode it to run on Windows. Both encodings could then be instantiated and executed on their respective physical machines.
For our purposes encoding and simulation are very similar—simulation is just taking a given encoding and pressing "run".
But what exactly makes an encoding—and thus a simulation—*good*? Imagine we are trying to [simulate the Earth's weather via terraforming Venus](Terraforming%20Venus%20to%20Simulate%20Weather%20on%20Earth.md), but we are never able to cool the surface of Venus below its current ~870 degrees Fahrenheit. In that case the resulting simulation of Earth's weather will be so different from Earth's actual weather that we may be hesitant to even call it a simulation.
A good encoding is one that minimizes the difference between the abstract rules of $A_1$ and the rules physically instantiated by $P_1$. It is one that represents the source system faithfully. More specifically, a good encoding is one that creates an [Isomorphism](Isomorphism.md) between $A_1$ and $P_1$. For our context an isomorphism just means there's a one-to-one mapping where higher-level structures and relationships are preserved. The systems correspond in a way that keeps the essential features intact.
Consider the following expression:
$2 + 3 = 5$
We could encode that in a way that preserves structure and relationships via the following string:
$\text{— — p — — — q — — — — —}$
Both are just strings of symbols, but we may notice there is a *correspondence* between them. We can interpret the second as having the string '$\text{— —}$ ' corresponding to $2$, the $p$ corresponding to $+$, the $q$ corresponding to $=$, and so on. There exists an isomorphism that preserves the higher level structure and meaning between the two statements.
%%TODO: Consider bringing in "What is a better explanation of encoding from [What is a better explanation of encoding?](Active%20Project/V1/Outline/Prevailing%20Argument.md#What%20is%20a%20better%20explanation%20of%20encoding?)%%
###### Decoding
What about [Decoding](Decoding.md)? Well in a trivial sense, once you have _encoding_, decoding is just the reverse process. In this case it is just going from our encoded string expression, back to our original expression.
%%TODO: add visual with arrows representing encoding and decoding for this%%
Decoding is fundamentally an act of *information revealing*. A decoding mechanism doesn't create the meaning but rather makes it accessible. Consider a record player revealing the music encoded in the grooves of a record. The encoding has an underlying structure that the decoding process exploits.
Now imagine we did not have access to the original expression—we only can interact with the new expression. Thus we have a statement which we believe has information *encoded in it* that we would like to *decode* in a way that we can then interpret. In a case like this, will any decoding work? For instance, what about the following expression—would it be a reasonable interpretation of the encoded statement?
$2 = 3 \text{ taken from } 5$
This decoding is a meaningful interpretation, for there is an isomorphic mapping between it and the encoded statement. What about this statement?
$\text{apple apple} \text{ bomb} \text{ apple apple apple} \text{ horse} \text{ apple apple apple apple apple }$
Here we have a consistent symbol replacement: $\text{apple} = \text{—}, \text{bomb} = \text{p}, \text{horse} = \text{q}$. But we have not preserved any of the higher level structure! The original two statements both captured essential meaning about numbers, adding them together, and their equivalence. The third statement has none of that higher level structure—it is pure nonsense.
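The difference between these decodings can even be made mechanical. A small sketch, using "-" in place of the dash glyph: the first decoding exposes structure we can test (does the left side really sum to the right?), while the symbol-swap decoding leaves nothing to test at all.

```python
ENCODED = ["-", "-", "p", "-", "-", "-", "q", "-", "-", "-", "-", "-"]

def runs_of_dashes(tokens: list[str]) -> list:
    """The structure-preserving decoding: runs of dashes become counts."""
    decoded, n = [], 0
    for t in tokens + ["end"]:
        if t == "-":
            n += 1
        else:
            if n:
                decoded.append(n)
                n = 0
            if t in ("p", "q"):
                decoded.append(t)
    return decoded

a, op, b, eq, c = runs_of_dashes(ENCODED)  # [2, 'p', 3, 'q', 5]
assert a + b == c   # the higher-level arithmetic structure survives

# The apple/bomb/horse reading is also a consistent symbol swap...
swap = {"-": "apple", "p": "bomb", "q": "horse"}
print(" ".join(swap[t] for t in ENCODED))
# ...but it exposes no structure left over to check it against.
```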
###### Interpretation
What we can see is that for a given statement there are effectively *infinite* ways to decode it. Some of these decodings yield meaningful [Interpretations](Interpretation.md), some yield nonsense. But what is an interpretation? An interpretation is simply an explanation that we conjecture.
There isn't necessarily a single *true* interpretation of a system. However, to be a *valid* interpretation it must be backed by a *good explanation* %%TODO: Reference DD FOR—119,120%%. We cannot just decode and interpret a statement however we'd like! It must be that our best explanations tell us this is a good interpretation. This means it must be *hard to vary*. For example, one of the reasons the apple, bomb, horse decoding is so bad is that it can be easily varied. Why an apple and not a banana? Why a horse and not a cow? On the other hand, in the statement $2 = 3 \text{ taken from } 5$, swapping out any symbol for another will yield a worse interpretation.
One of the most beautiful examples of this is shown in [Godels Incompleteness Theorems](Godels%20Incompleteness%20Theorems.md). Via a method known as Godel numbering, Kurt Godel showed that statements of number theory can be interpreted on two levels: as statements about numbers and as statements about the system of numbers itself. But he did *not* show that statements about numbers can be interpreted as being about *anything*! Quite the opposite—his numbering scheme was an incredibly specific, hard-to-vary form of encoding. Interpreting a statement based on this scheme constituted a great explanation.
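A sketch in the spirit of Godel's scheme (the symbol codes are invented for illustration, and this is not his exact construction): each symbol gets a fixed code, and a formula becomes the product of successive primes raised to those codes. Unique prime factorization makes the encoding lossless, which is exactly what makes the interpretation back out of the number hard to vary.

```python
SYMBOLS = {"0": 1, "S": 2, "=": 3}   # illustrative symbol codes
PRIMES = [2, 3, 5, 7, 11]

def godel_number(formula: str) -> int:
    """Encode a formula as a product of primes raised to symbol codes."""
    n = 1
    for prime, symbol in zip(PRIMES, formula):
        n *= prime ** SYMBOLS[symbol]
    return n

# "S0 = S0" (i.e. 1 = 1) encodes to a unique, recoverable number.
print(godel_number("S0=S0"))  # 2^2 * 3^1 * 5^3 * 7^2 * 11^1 = 808500
```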
Let us now bring this back to simulations and systems. We can imagine looking at some physical system and wondering: have the intrinsic rules of a simulation been encoded in this physical system? One may ask: "But wouldn't we be able to tell that a physical system was simulating something else? What about Venus simulating the Earth? Surely we could see the resemblance after all this terraforming."
Recall our talk of systems—specifically being inside vs. outside the system. If some physical system $P_1$ is simulating $A_1$, we are not part of either system—we are external to both. Thus, in order to interact with the simulation $P_1$, we must have some external viewing program—a decoding process—to help us do so. This requires decoding $P_1$ into a form we can work with.
In the case of Venus simulating Earth, the "external viewing program" is just our eyes (looking through telescopes, and at images taken via cameras). But now imagine the [Autoverse](Autoverse.md). Its rules are so complex that we could rightly call them an internal set of [laws of physics](laws%20of%20physics.md). At any time step, all we have access to is a list of numbers. These numbers represent molecules, but if we want to *visualize* a molecule from the Autoverse, we must decode the simulation into a form we can work with and see.
Thus sometimes interpretation is straightforward—it simply relies on our eyes. In these cases we hardly notice that an interpretation is occurring at all. However, in other cases it can be very challenging without a specific decoding program.
Back to our question: we are looking out at some physical system and wondering, have any intrinsic rules been encoded, yielding a simulation? How might we determine this? Could we argue that *any decoding process* would be equally valid?
No! In this case we only have one course of action: come up with an *explanation* for what might be occurring inside that system. We can attempt to generate decoding procedures based on our best explanations. Any old decoding procedure simply won't do. Not all interpretations are valid—only those which are a consequence of our best explanations.
At this point one may reasonably ask: the interpretations we arrive at via good explanations—are they *real*? While I would like to avoid the [Essentialist Trap](Essentialist%20Trap.md) and spend the rest of this essay debating just *what is real*, we can address this via a great criterion provided by David Deutsch in *The Fabric of Reality*, namely [Dr Johnsons Criteria](Dr%20Johnsons%20Criteria.md). This states that if an entity is [Complex](Complexity.md) and [Autonomous](Autonomous.md) according to our simplest explanation, then that entity is real.
Consider the [primality testing of the number 641 via dominos](The%20Domino%20That%20Didn't%20Fall.md). We believe that the real reason the final domino will be up or down depends on certain abstract entities, such as the natural numbers and the primality of $641$. These are not physical, but they impact a physical entity, namely the last domino. We are then [Forced To Take a Position](Forced%20To%20Take%20a%20Position.md): are these non-physical entities ([Abstractions](Abstractions.md)) real or not? If they are not real, we must explain how non-real entities interact with real ones. If they are real, then they fit right into our best explanations; no additional explanation is needed. Thus, classifying them as real is the better explanation! To classify them as not real would just leave something unexplained, namely by what mechanism "unreal" entities interact with real ones.
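For concreteness, the abstract fact that settles the final domino's fate is nothing more than this:

```python
def is_prime(n: int) -> bool:
    """Trial division; more than sufficient for a number this small."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

print(is_prime(641))  # True: the abstract fact the domino chain computes
```

The domino chain and this snippet are wildly different physical systems, yet the same abstraction, the primality of $641$, determines the behavior of both.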
Thus our simplest explanation would argue that any simulation can be viewed as at least two simultaneously real things. The first is a physical system obeying the laws of physics. The second is a physical system instantiating a higher-level set of abstract rules. %%TODO: Reference DD FOR—119,120%%
Why is this important to touch on? Because I am claiming that interpretations are core to everything we experience—this includes our imagination, science, reasoning, thinking, and all forms of external experience.
%%TODO: answer if VR is equivalent to simulation in the end of the chapter 5 writings%%
%%TODO: Reference DD FOR—119,120%%
###### Intentionality
The final concept that we have yet to discuss is [Intentionality](Intentionality.md). Consider a physical system $P_1$ that is attempting to simulate $A_1$. There is an *intention* behind it: to match $A_1$ as closely as possible. But what if $P_1$ still does a very poor job of rendering $A_1$? Take our Venus example: say we're trying to make Venus simulate Earth, but Venus's surface still reaches some 460°C every day. That is a terrible simulation of Earth. We're not capturing the key features of what makes Earth Earth.
This shows that [Intent](Intent.md) alone is not enough to classify a physical system as a simulation. But what is intent good for then? Intention matters _insofar as_ it guides criticism. If I know the intent was for Venus to simulate Earth, I can start criticizing the attempted simulation and coming up with an explanation of whether it actually is a simulation or not. Without knowing this intent, I may never even conjecture that Venus was trying to simulate Earth at all.
# Moravec's Machine Creates a Chain of Contradictions
%%TODO: should I call this something like "the chain snaps: cracks in moravecs reasoning"%%
%%TODO: really want to show your window into the world here%%
%%TODO: Determine, outside of simply not seeking good explanations, what is that main / root error Moravec makes%%
It is hard to know where to begin in criticizing Moravec's argument. A closer inspection reveals that it is muddled and mired in [Contradiction](Contradiction.md) and bad [Explanations](Explanations.md). We must pick an entry point, and I'll start us off with a key contradiction.
###### Contradiction at the Core of Simulation
%%TODO: other title: "The Morass of Contradictions: Simulation"%%
In order to arrive at his claim that a rock can be interpreted as a simulation of a conscious mind, Moravec is entirely reliant on two different, contradictory definitions of simulation. At one point during the essay he argues that:
> [!quote]
> The simulation proceeds with or **without** such external interpretation.
In other words, a simulation is defined only by its *intrinsic rules*—it is a self-contained system that does not depend on any external observer. However, he then goes on to state:
> [!quote]
> What does it mean for a process to implement, or _encode_, a simulation? Something is palpably an encoding if there is a way of **decoding** or translating it into a *recognizable* form.
For a process to *implement* a simulation is to *be* a simulation. This is because, at its core, according to his first definition (and the standard definition I would agree with), a simulation is defined solely by its intrinsic rules. So wherever those intrinsic rules are instantiated *is* that simulation.
But wait, this means he has just said the following: a process *is* a simulation if it can be *decoded* into a *recognizable* form. And there is that sneaky contradiction. He is arguing both that a simulation is entirely self-contained and that a simulation can sometimes depend on whether it can be decoded into a recognizable form. The latter would specifically require some external observer or program.
By slipping a contradiction into his core definition of simulation, Moravec has effectively opened the door for all sorts of bad ideas.
Suppose he were to rebut this as follows: "Well, I never actually came out and explicitly said 'Simulation does not depend on an external observer. Simulation does depend on an external observer.' My argument was more nuanced than that. You are misinterpreting me."
Fine. Let us [Take His Theory Seriously on its Own Terms](Take%20Theories%20Seriously%20on%20Their%20Own%20Terms.md) and [Force Him To Take a Position](Forced%20To%20Take%20a%20Position.md)—for [Logical Consistency Forces Taking a Position](Logical%20Consistency%20Forces%20Taking%20a%20Position.md)—using *exact quotes*. He wrote:
> [!quote]
> Given the right playbook, the thermal jostling of the atoms in a rock can be seen as the operation of a complex, self-aware mind.
We now ask him: Do simulations exist without interpretation? If he answers yes (as he originally claimed), then his statements about rocks are false.
Suppose he answers "No, simulations actually do require external interpretation." Well then his claim here is false:
> [!quote]
> A simulated world hosting a simulated person can be a closed self-contained entity. It might exist as a program on a computer processing data quietly in some dark corner, giving no external hint of the joys and pains, successes and frustrations of the person inside.
There is no escaping the fact that his argument has a contradiction at its core. But of course he could still respond: "You misunderstand me still! What I was actually arguing was that *sometimes* a simulation depends on external interpretation, and *sometimes* it does not."
To start, this is a classic bad argument, and a retreat from explanation. Note that the concept of simulation is itself a form of explanation. It is defined in a specific way to solve specific problems. And this arbitrary counterargument with which he parries us spoils the entire explanation of simulation! This is seen more fully in [Argument Is How We Achieve Justification](Argument%20Is%20How%20We%20Achieve%20Justification.md).
%%TODO: Bring this in — but wait to bring in until you see how much you pull in to your interlude principles of reason section, and the explanation at the end%%
To say that simulation sometimes does and sometimes does not depend on external interpretation destroys the original definition and purpose of simulation, which was built entirely upon the idea of a set of rules intrinsic to some system being executed. Moravec is of course welcome to define anything however he likes, but he does not get to keep the original definition and explanatory power of the standard view of simulation if he alters it in a way that breaks it! Also note that his updated definition of simulation solves no problems. He has complicated the original definition by adding an arbitrary case where it does not hold, and nothing has been gained.
There is one final move he may try to make: "No, no, no. You have misinterpreted me yet again! When a simulation depends on an external interpretation, that external interpreter has become part of the system. The combined system now has intrinsic rules that are consistent."
At this point he is contradicting half of what he said throughout his essay, but no matter. If he chose to do that, we could keep playing this game. What is an external interpreter? It is simply a [Program](Program.md). Once this program becomes complex enough to represent the simulation itself, we don't need the rock anymore. It provides nothing. Including the rock just yields a more complex version of the simple external interpretation program that is itself the simulation. The rock serves no purpose and by [Occam's Razor](Occam's%20Razor.md) it can be discarded. This is exactly the same form as [Argue Against Solipsism: Take it Seriously](Defend%20Science%20by%20Arguing%20Against%20Arbitrary%20Boundaries.md#Argue%20Against%20Solipsism%20Take%20it%20Seriously).
###### Encoding: A Slippery Slope Towards Tautology
Having just demonstrated there is no escape from the contradiction plaguing Moravec's argument, let us explore how he arrived there. We have already seen the quote twice. He has specifically defined encoding entirely in terms of an entity's ability to be decoded:
> [!quote]
> Something is palpably an **encoding** if there is a way of **decoding** or translating it into a recognizable form.
To show why this is such a problem, let's talk through the standard definition of an encoding. An encoding is the process of representing information in a specific physical system. If we were to encode a simulation, we would need to encode its key properties. A simulation's key properties are its intrinsic rules. That is what *makes up* the simulation. So, in order to properly encode a simulation, a physical system must encode its rules.
Returning to Moravec's definition, we see that it entirely sidesteps the standard definition and replaces it with one that has absolutely no explanatory power. He has opened the door to the argument that a process encodes a simulation if there is a way of decoding it. This is in direct [Contradiction](Contradiction.md) to his previous definition, under which a simulation depends only on intrinsic rules! As it stands, the only way he can salvage this is *if* he defines decoding in such a way as to carry the entire weight of the explanation that encoding once held. As we will see, he never actually addresses decoding properly, and it is this move that allows him to end up interpreting anything as anything.
At its core, Moravec's definition is saying "an encoding is something that can be decoded". This is perfectly analogous to saying "a lock is something that can be unlocked". While both of these statements are true, they provide no new information. Depending on how they are read, they are either a [Tautology](Tautology.md) or [Circular](Circular%20Reasoning.md). If we read this statement analytically, it is a *conceptual tautology*, for it provides us with no new information or knowledge about the world. If we read this statement as meant to be informative, it is *circular*, for it defines a word by appealing to itself. Either way, it provides us with no independent content.
Let's explore this further. Consider a conceptual reading of "a bachelor is an unmarried man". The predicate is contained in the subject. If you already know what a bachelor is, then "unmarried man" provides *no new information*. It is of course true, but entirely vacuous. The same applies to "an encoding is something that can be decoded". If you already know what an encoding is, the predicate of this statement provides no new information. In this way it is a conceptual tautology.
Now consider someone who has no idea what a lock is. We can try to provide them an explanation in the Moravec style by saying "a lock is something that can be unlocked". At first this may appear to provide information. However, we are using *unlocked* to explain the concept of a *lock*. But *unlocked* already presupposes the concept of *lock*. This is entirely circular. The term we are trying to define (lock) is used inside its own definition (via unlocked). In other words, we are trying to explain something, but our explanation *relies on* the thing being explained. This is informationally empty.
In a sense it doesn't really matter whether we describe this as a tautology or circularity. The common thread is that both are [Lacking Explanatory Content](Lacking%20Explanatory%20Content.md). We learn nothing about encoding from this definition; it merely shifts the explanatory responsibility onto decoding.
To be fair, this need not be a bad thing. If Moravec can then define decoding in a way that shoulders the explanatory burden generally carried by encoding, there is no problem. More generally, he is free to define anything any way he'd like—he can redefine encoding and decoding as he sees fit. Why then am I making such a big fuss about this? Because this redefinition is a trojan horse in his argument.
%%TODO: Hmm I'm wondering if this argument is actually that strong. Could you say that it is actually helpful to know that a certain "function" has an "inverse"? There could in theory be locks that can never be unlocked?%%
%%TODO: I think I can rewrite this where the main argument is "this is explanatorily empty—depending on how we view it and the context we have, it is either a tautology or circular, or trivial. Regardless, it is contentless. You could also introduce a new term here: "explanatory tautology", "Explanatory vacuity", "Explanatorily Vacuous"%%
###### The Semantic Bait-and-Switch
Moravec never goes on to properly define decoding (more on that in a moment). By redefining encoding purely in terms of decoding, and then never properly defining decoding, he has effectively performed a [Semantic Bait-and-Switch](Semantic%20Bait-and-Switch.md). This is where one relies on the standard, implied meaning of a word while redefining it in an entirely different way. Often this new definition will have *dissolved explanatory power* and removed [Constraints](Constraints.md). A reader (and frequently the writer—in this case Moravec) will keep the original, constrained definition in mind, but consider the consequences of the new, unconstrained definition.
In this case we typically think of encoding as a structured process that systematically transforms information from one form to another. A good definition of encoding should specify the rules or structure that transform information from one form to another. His definition ignores the relationship between the original information and its encoded form—it just states that if you can recover it, then it must have been encoded. Thus there is a monstrous gap between the two definitions.
Thus when we arrive at the conclusion that—according to this redefinition—everything can be viewed as an encoding, it seems quite profound. But this is only so because we still have our original definition of encoding in mind! Our cognitive habits have kicked in and used the original definition of encoding to bridge this gap.
Let's make this even more concrete. Moravec started with the concept of encoding, holding a definition similar to ours. He then redefined it in such a way that nearly all of its meaning was erased. At this point he might as well have come up with an entirely new term. Instead of encoding and decoding he could equally well have called it "boogling" and "deboogling". It is technically fine to redefine a word, but it creates a situation ripe for misunderstanding. Moravec never highlighted that we ought to abandon our old conceptions of what an encoding was. This is a classic example of [Bad Philosophy](Bad%20Philosophy.md).
This sleight-of-hand effectively allows *anything* to be viewed as an encoding. Of course this has removed all explanatory power, for [If a Theory Can Explain Anything, It Explains Nothing](If%20a%20Theory%20Can%20Explain%20Anything,%20It%20Explains%20Nothing.md).
His entire argument hinges on this deflated concept of encoding. It is this exact redefinition that creates the contradictory definition of simulation. Once we allow simulation to apply to anything, we get nonsensical claims such as: "_anything at all can theoretically be viewed as a simulation of any possible world_". Encoding is [doing a lot of work](Word%20doing%20a%20lot%20of%20work.md)—at this point it is carrying the brunt of Moravec's entire argument.
We now must return to how Moravec handled *decoding*. As I mentioned, if he defined it in a way that preserved explanatory power then his argument would still be on firm ground. But he did the exact opposite—he descended further from explanation.
###### Decoding: The Final Descent From Explanation
As always, we must [Take Theories Seriously on Their Own Terms](Take%20Theories%20Seriously%20on%20Their%20Own%20Terms.md). It is no different with definitions. He is free to define decoding however he'd like, but we must be vigilant and keep front of mind that decoding must now carry the burden encoding once did. So how exactly did Moravec define decoding?
Well, that is the problem: he never actually defined it! We are left not knowing what problem it solves or how it relates to the general understanding of encoding. Are intrinsic rules and relationships still preserved?
But he did something even worse—he descended yet further from explanation, into the realm of logical possibility. He wrote:
> [!quote]
> Why not accept all mathematically possible decodings, regardless of present or future practicality? [...] Mathematically, however, the job can also be done by a huge theoretical lookup table that contains an observer’s view for every possible state of the simulation...
This is [Logical Possibility Pumping](Logical%20Possibility%20Pumping.md) at its finest. He appears to be reasoning just as a physicist or mathematician might—peeling back unnecessary constraints and generating helpful abstractions. But Moravec's argument is only analogous in form, not in function. The reality is that he has shaved off layer after layer of explanatory power. He has managed to arrive at: anything that can be decoded is an encoding, and anything can be decoded because we can just use a lookup table.
His quote is a beautiful example of the [Surely Operator](Surely%20Operator.md)—other than the fact that he didn't use the word "surely", the flavor is the same. He tries to sneak past us with the "why not". He doesn't argue that this is a good explanation or that it solves a problem. He tries to draw on our good nature and be let off with a pass.
But that just won't do. While still taking his argument seriously, we must ask: where did this lookup table physically come from? On its own it explains nothing. [Description Is Not Explanation](Description%20Is%20Not%20Explanation.md). A lookup table is not a fundamental process—it's a storage mechanism. It does not transform information—it merely stores precomputed results.
To create the table in the first place, someone or something must have first encoded the mappings within it. This means that the lookup table is not a substitute for an encoding process—it is the product of one. In order to be used, a lookup table must have been *physically instantiated* somehow—whether in RAM, on a hard drive, or in neurons, it must exist somewhere. Of course you can posit some [Abstract](Abstractions.md) mathematical lookup table, but in order to *use* that lookup table we *must* physically instantiate it. We are dealing with [Computation](Computation.md), and while [Computation is the Window to the Abstract](Computation%20is%20the%20Window%20to%20the%20Abstract.md), at its core [Computation is a Physical Process](Computation%20is%20a%20Physical%20Process.md).
Consider a lookup table mapping a conscious mind to states of a rock. There are only two ways this table could have been generated. The first is that Moravec let the physical processes of the mind and the rock step forward in time and recorded the states. The second is that he had a program simulating a conscious mind and ran that program, mapping its internal states to rock states. Either would create the lookup table he was after.
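Here is a minimal sketch of that second route, under stated assumptions: `mind_step` is a hypothetical stand-in for one step of a mind-simulating program, and `rock_states` is an arbitrary enumeration of the rock's thermal states.

```python
# Building Moravec's "playbook" requires actually running the computation.
def mind_step(state: int) -> int:
    """Hypothetical stand-in for one step of a mind-simulating program."""
    return (state * 31 + 7) % 1000  # placeholder dynamics

def build_playbook(initial_mind_state: int, rock_states: list[int]) -> dict:
    table, mind = {}, initial_mind_state
    for rock in rock_states:
        mind = mind_step(mind)  # the computation happens HERE, not in the rock
        table[rock] = mind
    return table

playbook = build_playbook(42, rock_states=[901, 17, 344, 580])
# Delete rock_states entirely and the mind's trajectory is unchanged:
# the rock contributes nothing to the computation.
```

Notice that the rock states are pure labels. The mind's trajectory is generated entirely by `mind_step`; the rock could be replaced by any other list of tokens without changing a single computed state.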
But this introduces a huge problem! In either case, the computation had to actually be *executed*, and never once did it happen inside the rock! This is where his argument fully collapses in on itself. He is claiming a rock encodes a conscious mind—but to say that, he is relying on an external decoding mechanism (like a lookup table) that only exists if the program has already been run elsewhere. The rock never instantiated intrinsic rules and never led that computation forward. At this point we can see that the rock is entirely arbitrary. It has done nothing to improve our explanation and has only complicated it. This violates [Occam's Razor](Occam's%20Razor.md) and is very similar to the argument the [Inquisition put forth against Galileo](Galileo%20vs%20the%20Inquisition.md). %%TODO: Consider bringing in "Defend Science by Arguing..." %%
###### Interlude: It is not about definitions, it is about explanations
I must make a brief aside about definitions and explanations. I am quite critical of the ["essentialist" tradition](Essentialism.md), which argues that philosophical problems can be solved by finding precise definitions of terms. Taking after Popper, I argue that trying to define terms *before* doing the explanatory work leads to sterile argument and stalemate. A problem shouldn't be reduced to questions of *word usage*.
But wait, haven't I just argued extensively about Moravec's poor definitions? Yes and no. I certainly did criticize his definitions, but I was criticizing them for a specific reason: they created *worse explanations*. He progressively *removed explanatory power*.
This is a problem because terms get their meaning through their roles in explanatory theories. Good concepts *explain* something—they do not just name it. Good concepts should *solve problems*—Moravec's redefinitions solve no problems and make existing ones worse. Concepts and definitions are like theories—we move to better definitions when they solve more *problems*. We want to create rival definitions that play a broader role in an explanatory framework. This is how we create [Progress](Progress.md).
Moravec's entire argument is *verbally deep* but *explanatorily shallow*.
- A lookup table is *extensional*—it merely lists input-output pairs—whereas the abstract program it stands in for is _intensional_: it embodies the rule that generates those pairs.
---
Date: 20241208
Links to:
Tags:
References:
* [Simulation, Consciousness, Existence - Hans Moravec](https://www.organism.earth/library/document/simulation-consciousness-existence)
[^1]: As always, this argument is not an [Ad Hominem](Ad%20Hominem.md). I am arguing against what Moravec *wrote*, not against Moravec himself, and I may be misinterpreting his position. But without being able to discuss this with him directly, this is the best I can do.
[^3]: [Simulation, Consciousness, Existence - Hans Moravec](https://www.organism.earth/library/document/simulation-consciousness-existence)
[^4]: For if we know the state at any point in time, and the rules in the forward direction are reversible (i.e., they have an inverse), then we can run the simulation in the backward direction.
[^5]: This notion of being outside or external to a [Formal System](Formal%20Systems.md) is explored in [Godels Incompleteness Theorems](Godels%20Incompleteness%20Theorems.md). For a fun introduction to these concepts, see the writings of Douglas Hofstadter.
[^6]: This idea has been explored extensively elsewhere. See the work of Stephen Wolfram or John Conway and his [Game of Life](https://www.youtube.com/watch?v=C2vgICfQawE)
[^7]: Effectively the philosophy of David Deutsch and [Critical Rationalism](Critical%20Rationalism.md)