# Brainstorm New Analogy for Logical Possibility Pumping

I am trying to determine a better way to describe [Logical Possibility Pumping](Logical%20Possibility%20Pumping.md). Here is some incoming inspiration:

* **Lifting and Drifting through Design Space**:
    * This comes from Dennett (pg 125 DDI). "Darwin's claim is that when the force of [Natural Selection](Natural%20Selection%20is%20a%20Constraint.md) is imposed on this random meandering, in addition to drifting there is lifting"
    * What I like about this is that it shows that natural selection is a constraint, and without a constraint, we just get random meandering. We need constraints and criticisms.
    * It shows that drifting isn't inherently a problem. But if it is not coupled with constraints, it is not useful.
    * "In the absence of natural selection, the drift is inexorably *downward* in design space" - this is because there are more bad states than good states.
* New Ideas:
    * Drifting through possibility space

## Thoughts V1

**Working Through the Analogy**

Okay, here we go. What I’d like to work through is the analogy set I’m going to use. A reasonable starting point is whether I want to keep the notion of Moravec’s machine. I’ve built a lot of terminology around it, framing how his argument creates these “links.” But now I’m realizing I don’t love this framing.

First, the idea of links tends to imply a single chain, whereas I’m picturing something more like an interconnected, stitched-together fabric. A fabric is visually much closer to what I have in mind.

Second, I don’t like the connotation of a “machine.” It suggests a deterministic, designed process—like an assembly line or a blueprint being executed. But what I really want to convey is more organic: drifting through a space of logical possibilities.

Given that, Moravec’s machine needs to go. Honestly, even the idea of “links and chains” probably should too, at least for now.

**A New Direction: Drifting Through Possibility Space**

So, where does that leave me?
I’m starting to like the idea of drifting through logical possibility space. It offers a helpful, vivid way to explain things to the reader. In fact, it’s appropriate to bring in Dennett’s example of Darwinian evolution: random meandering through design space is necessary but not sufficient. Similarly, drifting through possibility space is fine—there’s nothing wrong with it. You might drift into bad explanations, but even bad explanations can spark something useful. However, if you want to create something that actually solves a problem, you have to eventually apply criticism and seek good explanations. Drifting alone doesn’t get you there.

Drift isn’t inherently bad—but drifting by itself doesn’t create valid arguments. And at the end of the day, argument is all we have.

**Refining the Metaphor: Drift and Descent**

This is clicking into place. I like Dennett’s framing of drifting and lifting through design space. He points out that without natural selection, drift is inexorably downward. Similarly, in our context, drifting without criticism leads to descent in logical possibility space—moving away from good explanations.

I might describe it like this: there’s a space, an interconnected landscape of explanations, and arguments reside within it. Without constraints (i.e., criticism), you drift downward, away from truth or usefulness.

**Weaving the Ideas Together**

I still like the image of unraveling arguments—pulling on threads—so there’s still a weaving element here. Arguments exist within this interconnected space. We can weave them together, structure them, and pull them apart. And as we drift, we can either ascend toward better explanations or descend into worse ones.

I should revisit exactly what Dennett means by “lifting” in design space to deepen the metaphor. But overall, I think this is a much better alignment with what I’m trying to express: you’re seeking good explanations, not just making legal moves through possibility space.
You can often spot when an argument is just drifting—when every step is possible and “legal,” but none of it constitutes a good explanation. That’s likely what’s happening with Moravec’s argument at times.

TODO: Need to work the metaphor in around *explanation*, not just design

NBLM: `Design and Explanation Space Analogy`

Need to think about whether an explanation space makes sense, and whether it would fit in with conjecture and criticism [Popper's Three Worlds](Poppers%20Three%20Worlds.md)

---

## Thoughts V2

### **Walking Through the Core Problem**

Here’s what I’m trying to work through. You’ve gone down a rabbit hole about whether abstractions exist objectively. Are abstractions waiting to be created or discovered? Honestly, for the problem I’m trying to solve, I’m not sure it matters.

Let’s step back and start with the problem itself: you’re writing an essay that critiques Moravec’s approach, where he meanders through logically possible moves without arriving at a good explanation. This reminded you of Dennett’s idea of design space—how most logically possible genetic mutations lead to worse genes, not improvements.

Similarly, when constructing arguments or explanations, you can always tweak, adjust, and add. Some moves are obviously bad—like if I explained the apples in my fridge this morning by saying a battle between garden gnomes and guardian angels occurred overnight. Sure, it’s _logically possible_, but it’s a _bad explanation_.

Logical possibility is powerful because it merely requires freedom from contradiction—it’s far broader than physical or causal possibility. But _saying something is logically possible is not the same as making a good argument_. Stacking “this is possible” claims on top of each other doesn’t get you to truth or understanding.

What I appreciate about Deutsch is that when he reasons from first principles, he’s very careful about the consequences he draws. He doesn’t just assert possibility; he focuses on what logically follows.
That’s where the real scrutiny belongs.

### **Possibility Claims and Weakening Arguments**

In Moravec’s case, you could say, “Sure, it’s possible that a tree encodes a simulation,” but that doesn’t mean it’s a good explanation. Often, Moravec reaches possibility claims by either introducing contradictions or by weakening existing definitions—like breaking the standard definition of encoding by suggesting anything could encode anything if you had the right decoder.

Thus, when extending arguments via possibility claims, you must be cautious. You could weaken your overall structure without realizing it, undermining explanatory power or introducing hidden contradictions.

### **Constructing Arguments Without Needing a “Space”**

Now, taking a step back: the reason I started thinking about concepts like World 3 (abstract entities) was to better frame how Moravec is operating. But maybe we don’t need to invoke a big “space” of arguments or explanations. Instead, think of argument construction as _building_—like weaving fabric. You stitch together strands; different parts of the argument rely on others. Pull one critical thread, and the whole thing can unravel. This metaphor feels more intuitive and avoids unnecessary baggage about defining a “space” of arguments.

### **Does the Concept of “Design Space” Add Anything Here?**

You might wonder whether it’s worth bringing in Dennett’s idea of design space. Why did Dennett feel the need to introduce that concept, rather than just talking about combinations of genes? Probably because genes are concrete—sequences of four nucleotides. It makes sense to talk about all possible permutations, thus defining a design space.

But with arguments, it feels fuzzier. Arguments can draw on evidence, values, assumptions—things we might not even have invented yet. It’s harder to define a _closed space_ of all possible arguments. Thus, it may be better to avoid invoking a “space” at all here.
Focus instead on the process: constructing arguments, step by step, evaluating each addition for strength, coherence, and explanatory power.

# Logical Consequences and Decoding Explains Anything

Two new concepts are bubbling up in my mind:

1. [Logical Consequences](Logical%20Consequence.md) are key: I don't think I did a good enough job in draft one of describing what it means for something to have *consequences*. The consequences of a logical system or argument are out of your control. You don't get to choose them. Close examination shows that HM's argument had many consequences that he did not foresee.
2. The right way to think about arguments and explanations is that they are *constructed*. I don't need to say that they move into some massive "space of all arguments". And once constructed, they have consequences that are out of your control. What I want to call out about HM is that his *construction* was poor. He would build his argument by *removing* explanatory constraints and jumping toward logically possible ideas.

**Tree Structures**

* I am liking the idea more and more of a tree / structure that is built upwards. Each node in the structure bears some load. Certain nodes are more load-bearing than others. Each node implies consequences. Thus each node can be criticized. Visually I could draw this as a structure with the components HM constructed colored in black, and the consequences that he did not choose in grey.
* An interesting analogy to explore could be one where I look at how "load" is shifted around the structure.

**Could decoding be used to explain *anything*?**

* HM's definition of decoding can be used to "explain" *anything*—in the same way that "the gods did it" can. But if your definition of encoding now includes everything, then it tells you nothing.
* It also erases the original semantics and is a worse explanation (it solves no problems and creates new ones—where was the lookup table generated?).

**Descent From Explanation: Removing Explanatory Content Means Removing Consequences**

* There are two forms this can take:
    1. You replace a good explanation with a worse one
    2. You remove explanatory content altogether, *carving off consequences* so the theory no longer implies anything checkable
* Explanatory content means something that is _hard to vary_ and has _many logical, testable consequences_. Theories rich in explanatory power make bold predictions—ones that can be criticized, falsified, refined.
* Consider the example: "_The planets move in elliptical orbits because the gods did it._" What specific consequences follow from that? None. It doesn’t generate predictions. You can’t test it.
* In contrast, Newtonian gravity allows you to calculate where a planet should be at any given time. It has vast, detailed consequences that are logically implied and empirically testable.
* To descend from explanation, then, is to start replacing nodes in your argument with elements that reduce or eliminate consequences. You _weaken_ the structure—either by vague definitions or by overgeneralizing to the point where the theory no longer says anything concrete.
* This is what Moravec appears to be doing. He redefines concepts like “decoding” in a way that removes their constraints—making them so broad that they apply to everything, and thus explain nothing.
* The danger here is subtle. It’s not that he’s offering incorrect conclusions—it’s that his framework has no consequences. It lacks the falsifiability, the testable implications, the structure that makes a theory useful.
* Think of the theory of computation: it’s broad, but it’s powerful precisely because it’s specific. It offers a universal claim that’s testable—you can try to find a single counterexample in physical reality to falsify it. That’s what makes it strong.
* Compare that with “the gods did it.” That kind of claim offers no constraints, no predictions, and no way to falsify it. There’s no structure it implies that we can test. And that’s the danger in Moravec’s approach: his redefinitions move the argument away from structured explanation toward this consequence-free territory.
* Thus, Moravec is *chopping off* explanation. He is redefining terms in a way that removes consequences—this is the opposite of what we should be doing. We want bold theories that are ripe for criticism.

See ideas in: [Content](Content.md)

###### Explanatory Power

Explanatory content means something that is _hard to vary_ and has _many logical, testable consequences_. Theories rich in explanatory power make bold predictions—ones that can be criticized, falsified, refined. In other words:

* Theories with high explanatory content are *hard to vary* and have *high content*
* High content isn't enough: Ptolemaic planetary theory had high content, but it was more descriptive and thus easy to vary (just add another epicycle)
* Being hard to vary isn't enough either: explanatory content is the combination of two things:
    1. A theory that is hard to vary (each component of it plays a key role in explaining something)
    2. The associated content—consequences—of the theory

In other words, explanatory content is content that is directly connected to an explanation (rather than just a mere description). Maybe it is just the consequences implied by an explanation.

---
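The "anything can encode anything given the right decoder" problem above can be made concrete with a toy sketch. This is a hypothetical illustration of my own (the names `make_decoder`, `tree_rings`, and the example strings are all invented for the sketch, not from Moravec): for any pair of raw data and desired meaning, a lookup-table "decoder" can always be manufactured after the fact, so the mere existence of such a decoder rules nothing out.

```python
# Toy illustration: a "decoder" built after the fact.
# All of the semantic work hides inside the lookup table, so the
# decoder's existence carries zero information about the raw data.

def make_decoder(raw, meaning):
    """Manufacture a 'decoder' mapping this raw data to this meaning."""
    table = {raw: meaning}       # the table smuggles in the answer
    return lambda data: table[data]

tree_rings = "ring-pattern-1847"  # arbitrary "raw data"

# The same data "decodes" to whatever we chose in advance:
decode = make_decoder(tree_rings, "a simulation of a mind")
decode2 = make_decoder(tree_rings, "the complete works of Shakespeare")

print(decode(tree_rings))    # -> a simulation of a mind
print(decode2(tree_rings))   # -> the complete works of Shakespeare
```

Because such a decoder exists for every (data, meaning) pair, the claim "there exists a decoder" has no consequences—which is exactly the sense in which the redefinition explains everything and therefore nothing. The question it hides (where was the lookup table generated?) is where the real explanatory content would have to live.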