# The Prevailing Argument

%%TODO: I think it would be a good idea to add the prevailing argument here. Basically, what are good definitions and explanations of simulation, encoding, decoding, interpretation, etc. This will be the pillar we can count on as we show all of the flaws of his argument. In terms of a window / cohesive narrative, would this be even better suited for before my steelman of moravec? Note: I need this prevailing argument in order to ensure that I provide the proper window to the world%%

Up until now I have been solely trying to [Steelman](Steelman%20Argument.md) Moravec's argument, [taking it seriously on its own terms](Take%20Theories%20Seriously%20on%20Their%20Own%20Terms.md). In a moment we will see that it is truly unsalvageable. But my [Explanation](Explanations.md) for why will depend on showing that there is a *better* explanation available to us. Explanations are meant to solve [Problems](Problem.md)—they are [justified by their superior ability to solve the problems they address](Explanations%20Are%20Justified%20By%20Their%20Superior%20Ability%20to%20Solve%20Problems%20They%20Address.md). By being constructive, it will be easier for us to criticize the flaws of Moravec's argument in a concrete way. So let me sketch out the prevailing, standard explanation at play here.

###### Systems

I will define a [System](System.md) as some set of components following some set of [Rules](Rules.md). Systems can be *abstract* or *physical*. By abstract I simply mean non-physical but real according to our best explanations. An abstract system could be the mathematical system of prime numbers, the abstract [Program](Program.md) governing the flight simulator, or the abstract laws of physics governing Venus. You can think of it like a blueprint of the entities of the system and the rules that they follow. A physical system could be a [domino computer](The%20Domino%20That%20Didn't%20Fall.md), a physical [Flight Simulator](Flight%20Simulator.md), or the planet Venus. Note that we can always view physical systems as performing a [Computation](Computation.md), for computation is a [physical process](Computation%20is%20a%20Physical%20Process.md) [following rules](Computation%20is%20Following%20Rules.md).

All physical systems have an abstract counterpart. In the case of the Domino Computer, a physical system of spring-loaded dominos corresponds to the abstract set of rules governing the prime numbers. This system of abstract rules does not exist *physically*—you will never be out for a walk and trip over one. However, abstract rules can be instantiated in physical substrates—and they must be in order to become operational. For [Computation is a Physical Process](Computation%20is%20a%20Physical%20Process.md) and [Computation is also our Window to the Abstract](Computation%20is%20the%20Window%20to%20the%20Abstract.md).

All abstract systems have an infinite number of physical counterparts. Consider the abstract system of the game of chess. This abstract set of rules can be instantiated on a wooden chess board with wooden pieces. But the pieces could equally well be made of ceramic, marble, or stone. Or the game could be instantiated on your laptop, with charges on chips representing the abstract rules and state. Due to [Computational Universality](Computational%20Universality.md), it does not matter whether we instantiate the abstract system on a desktop computer, a [Human Computer](Human%20Computer.md), or [a set of spring loaded dominos](The%20Domino%20That%20Didn't%20Fall.md). They all will correspond equally well to the abstract system, assuming the physical system can act as a universal computer.
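To make the abstract/physical distinction concrete, here is a minimal sketch (in Python, purely for illustration) of the abstract rule the domino computer is said to instantiate: a primality test applied to 641. The function name `is_prime` is my own; the point is only that whether these rules are carried out by a laptop, a human computer with pencil and paper, or a field of spring-loaded dominos makes no difference to the abstract system itself.

```python
# One rendering of the abstract rule "test whether n is prime".
# Any universal computer (silicon, human, or dominos) could carry out
# these same rules and would reach the same answer.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(641))  # True: 641 is prime
```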
###### Simulation

Given that we have defined what a system is, let us now move to simulation. A [Simulation](Simulation.md) is a process in which one system $A$ tries to accurately render or represent another system $B$. Simulation utilizes computation, and computation is a physical process, thus simulation is physical. Because simulation inherently involves a relationship between an abstract and a physical system, we can think of a simulation as being composed of two layers.

There are two main types of simulation we must be aware of. The first is when we have an abstract system $A_1$ being simulated by a physical system $P_1$. Here $P_1$ is trying to render the rules of $A_1$ as accurately as possible. The second is when we have a physical system $P_1$ attempting to simulate a physical system $P_2$. This works as follows. All we have access to is $P_2$. It corresponds to an abstract system $A_2$. We conjecture a system $A_1$ that ideally emulates $A_2$. We then instantiate $A_1$ in $P_1$, thus simulating $P_2$. Thus to properly simulate $P_2$ it must be the case that $A_1$ accurately emulates $A_2$, and $P_1$ accurately simulates $A_1$. This means there are two sources of possible error, $e(A_1, A_2)$ and $e(P_1, A_1)$. %%TODO: generate an image of this%%

The error $e(A_1, A_2)$ is most prevalent when we either don't have a good conjecture for what $A_2$ actually is, or it is so complex that we must make $A_1$ only approximate it. An example of the former is simulating gravitational effects at quantum scales—we simply do not have a good conjecture for what occurs at that level in terms of gravity. An example of the latter is simulating incredibly complex quantum dynamical systems. The error $e(P_1, A_1)$ is most prevalent when the physical system $P_1$ struggles to capture $A_1$. An example of this would be if we were trying to simulate Niagara Falls, but instead of using a classical computer we chose to engineer a near replica. However, we chose to do so in the Yukon Territory of Canada, where the riverbed consistency is quite different from that of Western New York. This may lead to a poor representation of $A_1$. Note that this is similar to the [Center Court at Wimbledon](Center%20Court%20at%20Wimbledon.md).

Notice that the simulation is entirely indifferent to the substrate we have chosen to run it on. For instance, imagine we are creating a weather simulation on your laptop. You have access to the physical weather observed on Earth. This is governed by [The Laws of Physics](The%20Laws%20of%20Physics.md). However, we don't have access to the true laws, so we approximate them via our best conjecture. We then instantiate these laws physically on the computer in order to run the simulation. Notice that inside of our computer there is physically no rain, lightning, thunder, hurricanes, or tornados. Physically, we can see that these systems could not be more different. However, what we have is a correspondence between the *abstract* systems.
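To make the two error sources concrete, here is a minimal toy sketch of my own (not drawn from Moravec or Deutsch). Assume, purely for the example, that the "true" abstract system $A_2$ is exact exponential cooling, our conjectured system $A_1$ is a coarse discretization of it, and the physical instantiation $P_1$ can only carry out $A_1$'s rules with rounded arithmetic. All function names below are hypothetical.

```python
import math

# Toy stand-in for the "true" abstract system A2: exact exponential cooling.
# (In reality we never have direct access to A2; it is always conjectured.)
def A2_exact(t, x0=100.0, k=0.3):
    return x0 * math.exp(-k * t)

# Conjectured abstract system A1: a coarse forward-Euler approximation of A2.
# The gap between these two is the modelling error e(A1, A2).
def A1_conjecture(t, x0=100.0, k=0.3, dt=0.5):
    x = x0
    for _ in range(int(t / dt)):
        x -= k * x * dt
    return x

# Physical instantiation P1: the same rules as A1, but run on a substrate
# that can only hold two decimal places. The gap to A1 is e(P1, A1).
def P1_instantiation(t, x0=100.0, k=0.3, dt=0.5, digits=2):
    x = x0
    for _ in range(int(t / dt)):
        x = round(x - k * x * dt, digits)
    return x

t = 5.0
print("e(A1, A2) =", abs(A1_conjecture(t) - A2_exact(t)))         # error from our conjecture
print("e(P1, A1) =", abs(P1_instantiation(t) - A1_conjecture(t))) # error from the substrate
```

Neither error tells us anything about the other: we could conjecture $A_1$ perfectly and still instantiate it badly, or vice versa.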
###### Encoding

I have yet to answer how the abstract rules of $A_1$ actually get instantiated in $P_1$. This happens via the process of [Encoding](Encoding.md). Encoding is just the process of taking some abstract system and representing it in some physical medium. In other words, it is about mapping one system onto another. When we then instantiate and execute this, we have our simulation. Thus we can update our definition of simulation to be the physical *running of rules* that correspond (approximately) to a system of interest. Encoding is static, while simulation is dynamic. As mentioned earlier, a [Program](Program.md) is a *specific type* of abstract [System](System.md) that is designed to be instantiated and executed by a physical system (e.g. a computer).

Encoding is best understood via an example. Imagine we have an abstract weather simulation program. We have encoded it to run on MacOS. However, we could also encode it to run on Windows. Both encodings could then be instantiated and executed on their respective physical machines. For our purposes encoding and simulation are very similar—simulation is just taking a given encoding and pressing "run". But what exactly makes an encoding—and thus a simulation—*good*? Imagine we are trying to [simulate the Earth's weather via terraforming Venus](Terraforming%20Venus%20to%20Simulate%20Weather%20on%20Earth.md), but we are never able to cool the surface of Venus below its current ~870 degrees Fahrenheit. In that case the resulting simulation of Earth's weather will be so different from Earth's actual weather that we may be hesitant to even call it a simulation.

A good encoding is one that minimizes the difference between the abstract rules of $A_1$ and the rules physically instantiated by $P_1$. It is one that represents the source system faithfully. More specifically, a good encoding is one that creates an [Isomorphism](Isomorphism.md) between $A_1$ and $P_1$. For our context an isomorphism just means there is a one-to-one mapping in which higher-level structures and relationships are preserved. The systems correspond in a way that keeps the essential features intact. Consider the following expression:

$2 + 3 = 5$

We could encode that in a way that preserves structure and relationships via the following string:

$\text{— — p — — — q — — — — —}$

Both are just strings of symbols, but we may notice there is a *correspondence* between them. We can interpret the second as having the string '$\text{— —}$' corresponding to $2$, the $p$ corresponding to $+$, the $q$ corresponding to $=$, and so on. There exists an isomorphism that preserves the higher-level structure and meaning between the two statements. %%TODO: Consider bringing in "What is a better explanation of encoding from [What is a better explanation of encoding?](Active%20Project/V1/Outline/Prevailing%20Argument.md#What%20is%20a%20better%20explanation%20of%20encoding?)%%

###### Decoding

What about [Decoding](Decoding.md)? Well, in a trivial sense, once you have _encoding_, decoding is just the reverse process. In this case it is just going from our encoded string expression back to our original expression. %%TODO: add visual with arrows representing encoding and decoding for this%% Decoding is fundamentally an act of *information revealing*. A decoding mechanism doesn't create the meaning but rather makes it accessible. Consider a record player revealing the music encoded in the grooves of a record. The encoding has an underlying structure that the decoding process exploits.
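As a deliberately tiny sketch of this encode/decode relationship, the snippet below implements one possible version of the dash/p/q scheme from above: each number $n$ becomes $n$ dashes, '+' becomes 'p', and '=' becomes 'q'. The decoder simply exploits that structure in reverse. This is only an illustration of the idea, not any standard or established scheme.

```python
# A hypothetical encoding of "2 + 3 = 5" into the dash/p/q alphabet.
ENCODE = {'+': 'p', '=': 'q'}
DECODE = {'p': '+', 'q': '='}

def encode(tokens):
    out = []
    for tok in tokens:
        if tok.isdigit():
            out.append('— ' * int(tok))   # the number n becomes n dashes
        else:
            out.append(ENCODE[tok] + ' ')
    return ''.join(out).strip()

def decode(symbols):
    out, dash_run = [], 0
    for s in symbols.split():
        if s == '—':
            dash_run += 1                 # count a run of dashes as a number
        else:
            out.append(str(dash_run))
            out.append(DECODE[s])
            dash_run = 0
    out.append(str(dash_run))
    return out

encoded = encode(['2', '+', '3', '=', '5'])
print(encoded)           # — — p — — — q — — — — —
print(decode(encoded))   # ['2', '+', '3', '=', '5']
```

The decoder works only because the encoding preserved the structure it relies on.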
Now imagine we did not have access to the original expression—we can only interact with the encoded one. Thus we have a statement which we believe has information *encoded in it* that we would like to *decode* in a way that we can then interpret. In a case like this, will any decoding work? For instance, what about the following expression—would it be a reasonable interpretation of the encoded statement?

$2 = 3 \text{ taken from } 5$

This decoding is a meaningful interpretation, for there is an isomorphic mapping between it and the encoded statement. What about this statement?

$\text{apple apple bomb apple apple apple horse apple apple apple apple apple}$

Here we have a consistent symbol replacement: $\text{apple} = \text{—}, \text{bomb} = \text{p}, \text{horse} = \text{q}$. But we have not preserved any of the higher-level structure! The original two statements both captured essential meaning about numbers, adding them together, and their equivalence. The third statement has none of that higher-level structure—it is pure nonsense.

###### Interpretation

What we can see is that for a given statement there are effectively *infinite* ways to decode it. Some of these decodings yield meaningful [Interpretations](Interpretation.md), some yield nonsense. But what is an interpretation? An interpretation is simply an explanation that we conjecture. There isn't necessarily a single *true* interpretation of a system. However, to be a *valid* interpretation it must be backed by a *good explanation* %%TODO: Reference DD FOR—119,120%%. We cannot just decode and interpret a statement however we'd like! It must be because our best explanations tell us this is a good interpretation. This means it must be *hard to vary*. For example, one of the reasons the apple, bomb, horse decoding is so bad is that it can be easily varied. Why an apple and not a banana? Why a horse and not a cow? On the other hand, in the statement $2 = 3 \text{ taken from } 5$, if we swap out any symbol for another it will yield a worse interpretation.

One of the most beautiful examples of this is shown in [Gödel's Incompleteness Theorems](Godels%20Incompleteness%20Theorems.md). Via a method known as Gödel numbering, Kurt Gödel showed that statements of number theory can be interpreted on two levels: as statements about numbers and as statements about the system of numbers itself. But he did *not* show that statements about numbers can be interpreted as being about *anything*! Quite the opposite—his numbering scheme was an incredibly specific, hard-to-vary form of encoding. Interpreting a statement based on this scheme constituted a great explanation.
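For a flavor of how specific such a scheme is, here is a minimal Gödel-style numbering (a simplified illustration, not Gödel's actual 1931 construction): each symbol gets a fixed code, and a formula is packed into a single natural number by using those codes as exponents of successive primes. Because prime factorizations are unique, the formula can always be recovered.

```python
# A simplified Gödel-style numbering: assign each symbol a fixed code,
# then pack a formula into one number via exponents of successive primes.
SYMBOL_CODES = {'0': 1, 'S': 2, '+': 3, '=': 4, '(': 5, ')': 6}
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def godel_number(formula: str) -> int:
    # The i-th symbol contributes the i-th prime raised to that symbol's code.
    n = 1
    for prime, symbol in zip(PRIMES, formula):
        n *= prime ** SYMBOL_CODES[symbol]
    return n

def decode_godel(n: int) -> str:
    # Unique prime factorization lets us read the formula back out.
    code_to_symbol = {v: k for k, v in SYMBOL_CODES.items()}
    symbols = []
    for prime in PRIMES:
        exponent = 0
        while n % prime == 0:
            n //= prime
            exponent += 1
        if exponent == 0:
            break
        symbols.append(code_to_symbol[exponent])
    return ''.join(symbols)

g = godel_number('S0+S0=SS0')   # "1 + 1 = 2" written in successor notation
print(g)
print(decode_godel(g))          # S0+S0=SS0
```

The scheme is hard to vary: change any symbol code or reorder the primes and every number it assigns changes with it.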
Let us now bring this back to simulations and systems. We can imagine looking at some physical system and wondering: have the intrinsic rules of a simulation been encoded in this physical system? One may ask, "but wouldn't we be able to tell that a physical system was simulating something else? What about Venus simulating the Earth? Certainly we could see the resemblance after all this terraforming?" Recall our talk of systems—specifically being inside vs. outside the system. If some physical system $P_1$ is simulating $A_1$, we are not part of either system—we are external to both. Thus in order to interact with the simulation $P_1$, we must have some external viewing program—a decoding process—to help us do so. This requires decoding $P_1$ into a form we can work with. In the case of Venus simulating Earth, the "external viewing program" is just our eyes looking through telescopes and images taken via cameras.

But now imagine the [Autoverse](Autoverse.md). Its rules are so complex that we could rightly call them an internal set of [laws of physics](laws%20of%20physics.md). At any time step all we have access to is a list of numbers. These numbers represent molecules, but if we want to *visualize* a molecule from the Autoverse, we must decode the simulation into a form we can work with and see. Thus sometimes interpretation is straightforward—it simply relies on our eyes. In these cases we hardly notice an interpretation is occurring at all. However, in some cases it can be very challenging without a specific decoding program.

Back to our question: we are looking out at some physical system and wondering: have any intrinsic rules been encoded, yielding a simulation? How might we determine this? Could we argue that *any* decoding process would be equally valid? No! In this case we only have one course of action: come up with an *explanation* for what might be occurring inside that system. We can attempt to generate decoding procedures based on our best explanations. Any decoding procedure simply won't do. Not all interpretations are valid, only those which are a consequence of our best explanations.

At this point one may reasonably ask: the interpretations we arrive at via good explanations—are they *real*? While I would like to avoid the [Essentialist Trap](Essentialist%20Trap.md) and spend the rest of this essay debating just *what is real*, we can address this via a great criterion provided by David Deutsch in *The Fabric of Reality*, namely [Dr Johnsons Criteria](Dr%20Johnsons%20Criteria.md). This states that if an entity is [Complex](Complexity.md) and [Autonomous](Autonomous.md) according to our simplest explanation, then that entity is real. Consider the [primality testing of the number 641 via dominos](The%20Domino%20That%20Didn't%20Fall.md). We believe that the real reason the final domino will be up or down depends on certain abstract entities, such as primality, the natural numbers, and the primality of $641$. These are not physical, but they impact a physical entity, namely the last domino. We are then [Forced To Take a Position](Forced%20To%20Take%20a%20Position.md): are these non-physical entities ([Abstractions](Abstractions.md)) real or not? If they are not real, we must explain how non-real entities interact with real ones. If they are real, then they fit right into our best explanations; no additional explanation is needed. Thus, classifying them as real is the better explanation! To classify them as not real would just leave something unexplained: namely, by what mechanism "unreal" entities interact with real ones.

Thus our simplest explanation would argue that any simulation can be viewed as at least two simultaneously real things. The first is a physical system obeying the laws of physics. The second is a physical system instantiating a higher-level set of abstract rules. %%TODO: Reference DD FOR—119,120%%

Why is this important to touch on? Because I am claiming that interpretations are core to everything we experience—this includes our imagination, science, reasoning, thinking, and all forms of external experience. %%TODO: answer if VR is equivalent to simulation in the end of the chapter 5 writings%% %%TODO: Reference DD FOR—119,120%%

###### Intentionality

The final concept that we have yet to discuss is [Intentionality](Intentionality.md). Consider a physical system $P_1$ that is attempting to simulate $A_1$. It has an *intention* and is trying to match $A_1$ as closely as possible. But what if $P_1$ still does a very poor job of rendering $A_1$?
Take our Venus example: say we are trying to make Venus simulate Earth, but its surface still sits at roughly 870°F every day. That is a terrible simulation of Earth. We are not capturing the key features of what makes Earth Earth. This shows that [Intent](Intent.md) alone is not enough to classify a physical system as a simulation. But what is intent good for then? Intention matters _insofar as_ it guides criticism. If I know the intent was for Venus to simulate Earth, I can start criticizing the attempted simulation and coming up with an explanation of whether it is actually a simulation or not. Without knowing this intent, I may never even conjecture that Venus was trying to simulate Earth at all.