%%TODO: Determine where to put this — may be at start to help guide the reader. Or at end?%%

### Notes to Self

Most people aren’t familiar with the idea of treating arguments seriously on their own terms. That means you might end up isolating some readers, because they’re left wondering, “Wait, you just gave me all these reasons why this argument is wrong. Is there even a rival theory?” Therefore you need to show that yes, there absolutely _is_ a rival theory. Provide a prevailing approach that is a solid point to anchor on.

Note that [You Do Not Need To Be More Specific Than The Situation Demands](You%20Do%20Not%20Need%20To%20Be%20More%20Specific%20Than%20The%20Situation%20Demands.md). Someone *could* split hairs over semantics: “In your definition of a system, what if the entities are following no rules?” But even _that_ is a rule: simply “do nothing.” You could make this definition more and more specific to stave off more and more potential objections, but for our purposes these objections are not the key issue and are just a distraction! They only create a less [Cohesive Narrative](Cohesive%20Narrative.md).

Of course someone might offer a counterargument: “Okay, but your definition of a system is flawed.” Fine—but then the burden of explanation is on _them_. What problem does their alternative solve? If you think this definition of a system is critically flawed, then say _why_. What is missing? What part of the argument is being misrepresented because of it? Totally possible—but spell it out. If you try to predict all objections you will end up saying nothing of interest.

So, first and foremost, you must ensure that all readers are familiar with the concept of being inside and outside the system. Thus you have to introduce the concept of a system and provide a window into this world for readers. At core, Moravec is blurring those lines. You need a clear mental model—a clear window into the world—that answers: What _is_ a system?
What does it mean to be inside or outside of one?

---

###### Systems and Simulation

%%TODO: Potentially, referencing Popper, note that language is fuzzy and somewhat nebulous. Note that for the purposes of this argument we are really focusing on a single key aspect of simulation—the defining aspect—that it is an intrinsic set of rules%%

Let us first start by defining a [System](System.md)—it will be used extensively throughout everything that follows. A system can be broadly defined as a set of interacting components governed by specific [Rules](Rules.md) or principles. These components and their rules will have a distinguishable boundary differentiating the system from the environment in which it exists. While systems can be physical (the dynamics of water draining in a bathtub) or abstract (the system of prime numbers), we cannot interact with or gain knowledge about them in a way independent of physical processes. We interact with abstract systems via [Computation](Computation.md), for [Computation is the Window to the Abstract](Computation%20is%20the%20Window%20to%20the%20Abstract.md). And at the same time, [Computation is a Physical Process](Computation%20is%20a%20Physical%20Process.md). Thus any system we wish to interact with will need to be physically instantiated at some point. Finally, a system is self-contained—it exists with or without external [Interpretation](Interpretation.md). %%TODO: 2-3 examples%%

And just what is a [Simulation](Simulation.md)? It too is a system. It is a set of entities that evolve via the strict [Intrinsic](Intrinsic.md) [Logic](Logic.md) of a [Program](Program.md). This logic defines the [laws of physics](laws%20of%20physics.md) of the simulation. These [laws of physics](laws%20of%20physics.md) are just a set of [Rules](Rules.md). Being a system, a simulation is entirely self-contained, meaning it does not require external interpretation.
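This bare-bones picture of a simulation—entities evolving by an intrinsic rule, with no need for external interpretation—can be sketched in a few lines. This is a minimal toy world of my own; the "ring" rule and all names are illustrative, not anything from the argument above:

```python
# A minimal sketch of a simulation as a self-contained system: entities
# plus an intrinsic rule, evolving with no reference to anything outside.
# (The toy rule and all names here are illustrative.)

def step(entities):
    # The "laws of physics" of this toy world: each entity drifts one
    # cell to the right and wraps around a 10-cell ring.
    return [(pos + 1) % 10 for pos in entities]

state = [0, 3, 7]      # the entities of the system
for _ in range(4):     # the program's intrinsic logic, churning along
    state = step(state)

# After 4 steps each entity has moved 4 cells, modulo the ring size.
assert state == [4, 7, 1]
```

The loop runs identically whether or not anyone is watching; any claim that it "means" something is supplied from outside.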
If you want to "peek inside" the simulation, you can do so via an external viewing program. However, you will always be on the outside looking in! You are not *intrinsic* to—part of—the system.

What then makes a simulation different from a generic system? For our purposes, the central aspect that distinguishes the two is that a simulation is attempting to *accurately render something*—it is trying to emulate some other system. Slightly more technically, if we have a system $A$ that is simulating system $B$, that means that $A$ is attempting to have its intrinsic rules and resulting behavior render system $B$ as accurately as possible.

There are two looming questions that you may have at this point. The first is this: given that a simulation must be a physical process, how exactly do we represent a simulation physically? The second is: what exactly does it mean to accurately render *something*?

###### More on Physical Processes: Encoding and Decoding

As mentioned, in order to simulate anything, it must be physically instantiated. What exactly is 'it'? What exactly are we representing? The 'it' is the *entities* of the system and the *rules* they follow. To instantiate it physically means to map it to some physical system in a way that accurately preserves the entities and their relationships. We can be a bit more specific by referencing the concept of an [Isomorphism](Isomorphism.md). %%TODO: bring in "What is a better explanation of encoding" from [What is a better explanation of encoding?](Active%20Project/V1/Outline/Prevailing%20Argument.md#What%20is%20a%20better%20explanation%20of%20encoding?)%%

There is some system that you wish to simulate. It could be anything. To simulate it, you need to take the rules and entities of that system and represent them in a way that allows them to be simulated.
We know that simulation relies on computation, and the only way we can actually perform computation is physically, because computation is a physical process. So you need to take the rules, the relationships, and the entities of the system you wish to emulate, and instantiate them physically somehow. This instantiation process is what it means to encode something. You are encoding the key properties in a physical system, such that it accurately renders some external system. When we say we want to encode something accurately, we want the encoding to be isomorphic: we want the properties of the system we are seeking to simulate to be isomorphically captured by our encoding. That just means there is a one-to-one mapping that preserves the key structure.

You may ask: you have some system that you wish to simulate, and a system that is attempting to do the simulating. But you have also said that a simulation is a system and is internal to itself. How exactly would a system that only knows about itself have any idea whether it is accurately rendering some other system? This is where the notion of interpretation comes in. You will always simply have a physical system; it is just that it can be interpreted in different ways. One way of interpreting it is as just the physical system itself. Another way of interpreting it is that some external system (me, perhaps) sees that it is actually a really nice rendering of some other system. To interpret something is to view it in a certain way based on your best explanation. So yes, the system will exist objectively on its own, as some physical system. It will churn along physically with or without any interpretation. I interpret it a certain way based on my best explanation: there is a good explanation for doing so, which is why I am right to interpret it this way.
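The isomorphism requirement—a one-to-one mapping that commutes with the dynamics—can be made concrete. A minimal sketch, using a toy modulo-3 counter of my own invention; the names and the rule are illustrative, not from Moravec:

```python
# Two "systems", each a set of states plus a transition rule, and a
# one-to-one mapping phi between them. (Toy example; illustrative names.)

# System A: counting modulo 3 with states 0, 1, 2.
def step_a(state):
    return (state + 1) % 3

# System B: the same dynamics encoded in different "physical" tokens.
def step_b(token):
    return {"x": "y", "y": "z", "z": "x"}[token]

# The encoding: a bijection from A's states to B's tokens.
phi = {0: "x", 1: "y", 2: "z"}

# The encoding is isomorphic if it preserves the dynamics: mapping then
# stepping equals stepping then mapping, for every state.
assert all(step_b(phi[s]) == phi[step_a(s)] for s in (0, 1, 2))
```

If the final check held for only some states, `phi` would merely relabel a few snapshots; it is the rule-preserving property over *all* states that makes B an accurate rendering of A.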
So yes, this system will exist objectively on its own. It does not require an external interpreter. It will churn along physically as you'd expect once it's been instantiated. No questions asked. However, to say that it is *doing* something requires us to interpret it.

###### Interpretation: How do we *know* a system is trying to emulate another system?

###### What does it mean to encode and decode a simulation?

%%TODO: Do I need to bring in a better definition of computation earlier in the piece?%%

Due to [computational universality](Universal%20Computer.md), nearly any physical system has the ability to perform [Computation](Computation.md)—after all, [Computation is a Physical Process](Computation%20is%20a%20Physical%20Process.md). Simulation is just a computation, so encoding a simulation means representing it in some physical medium such that it can then be executed. The encoding must be *intrinsic* to the system—the system must internalize the rules. Notice that *any* simulation *must* be encoded. That is because there is no way to execute computation—and thus simulation—without physically doing so.

Encoding is generally treated as a structured, causal process that places information into a system in a way that follows a set of rules. This information can then be extracted by another system following the corresponding rules. Decoding is fundamentally an act of *information revealing*. A decoding mechanism doesn't create the meaning but rather makes it accessible. Consider a record player revealing the music encoded in the grooves of a record. The encoding has an underlying structure that the decoding process exploits. %%TODO: should I include isomorphism? What about inside vs outside the system?%%

###### And what does it mean to *Interpret* a simulation?

An [Interpretation](Interpretation.md) is simply an explanation that we conjecture.
What does it mean to [Interpret](Interpretation.md) some process to be a [Simulation](Simulation.md)? For instance, consider a planetarium simulating the night sky, or a [Flight Simulator](Flight%20Simulator.md), or [Terraforming Venus to Simulate Weather on Earth](Terraforming%20Venus%20to%20Simulate%20Weather%20on%20Earth.md). Should we interpret them as *real*? [Dr Johnsons Criteria](Dr%20Johnsons%20Criteria.md) tells us to interpret as real those complex entities which, if we did not interpret them as real, would complicate our explanations. So in the case of the flight simulator or the planetarium, both should be interpreted as real. They are complex and autonomous. They [Kick Back and Require an Independent Explanation](Kicking%20Back%20Requires%20an%20Independent%20Explanation.md). However, they are not *physically real* in the same way that the night sky or a real aircraft is real. This may seem like a contradiction but it is not. As explored in [5 - Reality of Abstractions](5%20-%20Reality%20of%20Abstractions.md), we can have different types of real - such as physically real and abstractly real. Here we have entities that are computationally real, but not physically real.

What about [Terraforming Venus to Simulate Weather on Earth](Terraforming%20Venus%20to%20Simulate%20Weather%20on%20Earth.md)? In that case we have a physical system - Venus - that is meant to simulate Earth. Surely, since that is a physically real system, the simulation is also real? But that is not so! That Venus is simulating Earth is an [*Interpretation*](Interpretation.md). The interpretation is an [Explanation](Explanations.md) of what is occurring. This requires an *interpreter*, and [Interpreters Make Use of Virtual Reality](Interpreters%20Make%20Use%20of%20Virtual%20Reality.md). All interpretation is a form of experience, and all experience is a form of virtual reality.
So we see that any simulation can be seen as a physically real system (for all simulation requires computation and all [Computation is a Physical Process](Computation%20is%20a%20Physical%20Process.md)), and also interpreted via virtual reality. An interpretation is real as an abstraction.

But wait! It may appear that we have just walked right into some muck. Does this mean that one is free to interpret Venus in *any way they'd like to*? Certainly *one* interpretation of Venus is that it is simulating Earth. Another is that it is just Venus. Another is that the gods are playing some divine game and trying to align the planets in order to achieve a high score. And so on. Thus it appears that we have trapped ourselves. We have said that interpretations are real as abstractions. But now we have certain interpretations that we know are real and others that we know are not. How do we get out of this? Where did we go wrong?

The answer is explained in [Dr Johnsons Criteria](Dr%20Johnsons%20Criteria.md) when discussing the reality of angels. We *don't* regard angels as real because they *do not* factor into our best [Explanations](Explanations.md). The same holds in our example of Venus. We are right to interpret it as simulating Earth because that is part of our *best explanation*! Based on our best understanding, we have explanations around the *intention* of the engineers and scientists who worked on this terraforming project. We have an explanation of their goals and how they set out to achieve them, and our explanation states that they are attempting to have Venus simulate Earth. It is incorrect to interpret it as part of some game of the gods, because that is not part of our best explanation, or any good explanation at all.

It is worth pausing to really drive this point home. We can *logically* interpret *anything* to be a simulation. This is the trap that Moravec fell into (more on that shortly).
While we can logically interpret anything as a simulation, it is an exceedingly bad explanation to do so. By simply starting from the [Principles of Reason](Principles%20of%20Reason.md), we take on the principle that [We Must Seek Good Explanations](We%20Must%20Seek%20Good%20Explanations.md). And if we are seeking good explanations then we *cannot* interpret just anything to be a simulation—only those things whose interpretation as a simulation *improves* our explanations! Thus, under the [Principles of Reason](Principles%20of%20Reason.md), it is not correct to claim that we *can* interpret anything as a simulation but merely *shouldn't*. Rather, we *can't* interpret anything as a simulation if we are seeking good explanations. [Rational Inquiry Requires Pursuing Good Explanations](Rational%20Inquiry%20Requires%20Pursuing%20Good%20Explanations.md), [Explanationless Progress is Impossible](Explanationless%20Progress%20is%20Impossible.md), and [Striving for Progress is our Most Fundamental Principle](Striving%20for%20Progress%20is%20our%20Most%20Fundamental%20Principle.md).

###### Summary of the Prevailing Argument

* Systems
* A simulation is a system that is trying to emulate something else
* But this is a matter of interpretation
* Which we must address because systems may be encoded or decoded

---

# TODO: Clean up below (does it belong anywhere?)

My (standard) explanation solves the problems far better than HM's

* I think I should include this somewhere. Not only do you want to obliterate HM's arguments on their own terms, but in the Popperian sense you should articulate the counterargument (mine, the standard view) and show why/how that solves problems more effectively
* Simulation will always require running the intrinsic simulation somewhere.
Saying that it then maps to some physical process, without explaining why, is arbitrary ([Occam's Razor](Occam's%20Razor.md)). The key idea is that there will effectively be an "interpretation program" that runs the simulation *somewhere*.

Better definition (mine): Simulations are not defined by the static record (noise or data) but by the rules and logic that give them dynamic coherence. *Being* vs *encoding* is a huge issue he is sweeping under the rug. A simulation seeks to *accurately render something* ([Rendering a Virtual Environment](Virtual%20Reality.md#Rendering%20a%20Virtual%20Environment)). There is an *intent* or an *objective* - it is trying to be isomorphic to something else. Another way of thinking about this is *encoding* a simulation vs *instantiating* a simulation ([Encoding Provides The Instructions, Instantiation Is The Execution](Encoding%20Provides%20The%20Instructions,%20Instantiation%20Is%20The%20Execution.md)). Being implies instantiating. But encoding and instantiating are different things.

HM's definition of simulation and encoding is bad on its own terms (contradictory), but it's also bad because it doesn't capture a key aspect of what we generally mean by simulation - that there is an *intent* to accurately render something ([Rendering a Virtual Environment](Virtual%20Reality.md#Rendering%20a%20Virtual%20Environment)). While it is true that we don't need to get hung up on definitions, if his definition of simulation doesn't properly capture a key aspect, then we will need to account for it with a subsequent definition in some way. A good definition should account for what we care about. And in this case it does not.

##### What is a better explanation of encoding?

So far I have just shown that his definition / explanation / theory of encoding & decoding is inconsistent. But is there a rival theory? There is — the common, standard definition.
So far I have been taking his definition seriously on its own terms and showing that if I do, it leads to a contradiction with other claims—and it blows up many of his main points (such as a rock being a simulation of a conscious mind). But we could be more *constructive* and provide a *better definition* of decoding.

* Focus on the "Revealing" Nature of Decoding: Hofstadter views decoding as an act of "information-revealing". A decoding mechanism doesn't necessarily create the meaning but rather makes it accessible. He uses the analogy of a record player revealing the music encoded in the grooves of a record. This suggests that the encoding has some underlying structure that the decoding process exploits.
* The Role of Isomorphism: For decoding to be successful and meaningful, there needs to be an isomorphism between the structure of the encoding and the structure of what it represents. The decoding process essentially maps elements of the encoding back to the elements of the original information or simulation based on this underlying correspondence. Moravec's definition, by itself, doesn't guarantee such a structured relationship.
* Isomorphism Requires Preservation of Relational Structure (Higher-Level Mapping): A true isomorphism, as Hofstadter describes it, involves a mapping between two complex structures where the relationships between the parts are also preserved. It's not just about mapping individual elements but also about how those elements are organized and interact within each structure. This leads to a correspondence between true statements (or meaningful configurations) in one system and theorems (or corresponding meaningful configurations) in the other.

Open questions:

* Does the idea of being inside or outside the system relate to this? His mixing of being inside vs outside the system seems critical. If I am outside the system, I have to interpret what is going on inside the system. That is how I give it meaning.
In the standard definition it is a causal process. In its normal, meaningful sense, encoding involves:

* A structured, causal process that places information into a system in a way that follows a set of rules.
* This information can then be extracted (decoded) by another system following the corresponding rules.
* Causality matters—the encoding process actually shapes the system in a way that enables later decoding.

Instead of requiring encoding to be a causal process, Moravec flips the standard definition:

* He runs an external system (the simulation) first, without any predetermined encoding scheme.
* Then, he injects meaning afterward by imposing an interpretation—as if the system had been encoded all along.
* The so-called “encoding” isn’t really an encoding at all—it is just a post-hoc reinterpretation of arbitrary data.
* The lookup table doesn’t encode—it just stores and interprets. Moravec’s method is not an encoding process—it is a passive recording of states, followed by an arbitrary mapping.

Incoming:

Let’s say I have a system—call it system A. It’s just some physical system. It could be a human brain, a chess game, whatever. The question is: what does it _mean_ to encode this physical system? To encode it means to take the information from system A and put it into another physical system in such a way that it can be decoded—meaning, there’s an inverse transformation that lets you recover the original system. Essentially, you’re moving from one physical format to another, but you’re preserving the rules and relationships that matter. So there’s an isomorphic relationship between the original system and the new one.

Now here’s the key part. Suppose I take a representation of the original system—say, a brain—and I run it. I generate a bunch of brain states. Then I take those states and encode _them_ in another medium, say, using a simple lookup table. I also have the inverse of that lookup table, so I can go back to whatever medium I started with.
But is that really an encoding of the system? What’s interesting is that, in this case, what you’re encoding is really just an _output artifact_. You’re encoding the output brain states—but not the rules that generated those states. You’re encoding the product, not the process. And the system shouldn’t be confused with its output.

So I think there’s something important here. A proper encoding captures the rules _of the system itself_—the dynamics that govern how it behaves—and maps those into another physical system in a way that can be reversed. If all you do is take the output and build your encoding process around that, something crucial is missing.

Also leaving off: diagrams in Notability; DH on isomorphism, meaning at multiple levels, etc. [ChatGPT](https://chatgpt.com/share/e/680005d4-9048-8006-a3ad-5a64a26b367e)
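The product-vs-process distinction can be sketched concretely. Below is a toy rule of my own (not anything from Moravec); the point is that a table of recorded states replays the trajectory it saw but carries no information about the rule itself:

```python
# Contrast: the rule of a system vs a lookup table of its recorded output.
# (Toy dynamics and illustrative names only.)

def rule(state):
    # The system's actual dynamics: double the state, modulo 7.
    return (state * 2) % 7

# "Output artifact": a lookup table built by running the rule from state 3.
trajectory = {}
s = 3
for _ in range(5):
    trajectory[s] = rule(s)
    s = rule(s)

# The table replays exactly the states that were recorded...
assert trajectory[3] == rule(3)

# ...but the rule itself is not captured: rule(0) is perfectly defined,
# while the table has no entry for the unvisited state 0.
assert 0 not in trajectory
```

The `rule` function is a (reversible-in-principle) encoding of the process; `trajectory` merely stores some of its products, which is why it cannot answer questions about states it never happened to visit.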