# AGI and Goal Posts

One argument I often hear around AGI and Super-intelligence is as follows:

> [!quote]
> [Mr Witt](Mr%20Witt.md): Well, what exactly is your definition of intelligence?
>
> Me: I think we are still trying to come up with a good definition of intelligence. This will continue to evolve as our *understanding* improves and we create better and better [Explanations](Explanations.md).
>
> [Mr Witt](Mr%20Witt.md): Ah, so you don't have a specific, concrete one. That is a shame, I was hoping to corner you. No matter, let's try this. What is something that current AIs cannot do that intelligent entities can do?
>
> Me: [Knowledge](Knowledge.md) creation.
>
> [Mr Witt](Mr%20Witt.md): Alright, and can you define knowledge creation?
>
> Me: We only know of two ways to create knowledge. Explanatory knowledge is created via [Conjecture and Criticism](Conjecture%20and%20Criticism.md), while [Non-explanatory Knowledge](Non-explanatory%20Knowledge.md) is created via [Evolution](Evolution.md).
>
> [Mr Witt](Mr%20Witt.md): Hmm. That seems rather human-centric. Shouldn't your definition be broader, to include ways that non-human entities may create new knowledge?
>
> Me: The burden is not on me to come up with an entirely new theory of knowledge creation, or a definition that would inspire one. There is only one known way, as it stands, to create new knowledge—and this is explained in detail by Popper.
>
> [Mr Witt](Mr%20Witt.md): Here is the issue with your current stance, though. You are saying that AI cannot create new knowledge, and yet your definition is constrained to the way that humans create new knowledge. Perhaps they *can* create new knowledge, just via some different means. Would it not be more helpful to just measure the *output*? *Has* new knowledge been created, yes or no? This moves us away from needing to understand the process and allows us to just think about the output.
>
> Me: Well, that just won't do. I could have a room full of monkeys banging on typewriters, and given enough time they'll produce some new bit of knowledge. The output would be misleading in that case.
>
> [Mr Witt](Mr%20Witt.md): Look, I am asking *you* why AI cannot create new knowledge. You are being quite difficult. Help me out here.
>
> Me: I am trying, but we just have different ways of viewing the world. But okay, let's try this. If an AI *did the following*, I would agree that it is creating new knowledge: if it managed, unprompted—so no human is in the loop contaminating the experiment—to consistently generate new statements and ideas that were good explanations. These explanations would then need to be criticized in order to ensure that they actually were good explanations. Both the generation of the idea and its criticism require creativity. But notice that this creativity has all sorts of unique constraints.
>
> Let's try this. Consider a piece of *new knowledge*. What attributes does it have? Well, it must solve some problem and tell us something about the world—it must map to reality in some way. But—and this is possibly the most important thing—it must relate to other existing knowledge in a special way: either by meshing and not conflicting with it, or, if there is a conflict, by its intrinsic nature offering an explanation for why those other pieces of knowledge are actually incorrect.
>
> Consider two different types of knowledge. The first type is more banal: where did I leave my keys this morning? I have a problem, I need to solve it.
> I conjecture they must be somewhere at home because I didn't leave the house, and I then guess that they are by the doorway. They aren't there, so I guess they were left on my desk, and so on. This does generate new knowledge, but we can quantify how *connected* it is to all other knowledge—its connectivity is *low*. Put simply, knowledge is an interwoven web that you can't stumble upon by chance. The higher the degree of connectivity (as in Darwin's evolution or Einstein's general relativity), the more knowledge.
>
> [Mr Witt](Mr%20Witt.md): I grant you that, but I'm a boots-on-the-ground type of guy. In practice, how would we measure that?
>
> Me: That is my point: *we do not know*. We have yet to come up with a good [Explanation](Explanations.md). At this point it is just of the form "you know it when you see it". We arrive at it via argument and criticism. Our minds are certainly doing something that could be programmed in a computer. However, we do not have a good explanation of this yet.
>
> [Mr Witt](Mr%20Witt.md): Alright, here is what I am trying to prevent. I am expecting that, in several years' time, LLMs will be creating new knowledge and you will be sitting over there, *having moved the goal posts*, saying "well, that isn't really knowledge creation". That is why I am trying to pin you down and get you to state a falsifiable prediction.
>
> Me: I have just given you one. If LLMs, or any other AI technology, can consistently and reliably create [Explanatory Knowledge](Explanatory%20Knowledge.md) that has a high degree of connectivity to other knowledge and stands up to broad forms of criticism, then I would happily admit that new knowledge is being created.
>
> [Mr Witt](Mr%20Witt.md): Isn't this just a matter of degree and not kind? I'm expecting that you will say "well, the knowledge being created isn't connected enough".
>
> Me: I claim that past a small degree of connectivity, creating new knowledge becomes intractable via a brute-force approach. You won't randomly stumble upon it.
>
> [Mr Witt](Mr%20Witt.md): My argument remains: if current models create bits of knowledge today, or will in a few years, then if we *extrapolate* forward in time, wouldn't we expect them to eventually be creating deep, richly connected knowledge?
>
> Me: No, for we do not have a good explanation of why that would be the case. Consider for a moment two basic computational processes: one is linear and the other is quadratic. If we only see the first few steps of the quadratic, we may say "well, this will likely scale linearly, correct?". But as we observe more steps, we see that is not the case (see the sketch at the end of this note).
>
> The same would apply if you wanted to travel to the moon. Objectively, traveling to the moon is just a matter of decreasing the distance between it and you. You can make a minor dent in that distance by climbing atop a large ladder. And you could argue: if I just keep doing this, [Surely](Surely%20Operator.md) I will arrive at the moon eventually.
>
> Of course, this is a terrible argument. Fully closing the distance between the moon and you requires an entirely different set of knowledge and ideas. For one, you must figure out how to make it there alive. Even if you *did* build a ladder to the moon, you would need food, water and oxygen to sustain you on your journey. And obviously a ladder would never work anyway, for the earth is rotating on its axis—connecting the ladder to the moon would cause it to snap—although it would certainly snap far before that, from the centripetal forces of the earth's rotation.
> I could go on, but the point is that this approach has fundamental limitations that we can explore via argument, via criticism.
>
> And that is my argument for why current AI will not scale to creating new, deeply connected knowledge. For our best theory of [Explanatory Knowledge](Explanatory%20Knowledge.md)—Popper's theory—states that it is created via conjecture and refutation. Theory comes first, then observation. Induction does not create new knowledge, and current AI works entirely via induction. Without conjecture and refutation, just like the monkeys, it will occasionally stumble upon trivially connected knowledge. But this is no different from climbing the first 20 rungs of the ladder and thinking you are well on your way to the moon.

---
Date: 20250418
Links to:
Tags:
References:

* [#77 (Bonus) - AI Doom Debate (w/ Liron Shapira) - YouTube](https://www.youtube.com/watch?v=MBEF6_ERk9I)
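
---

*Aside, not part of the dialogue above: a minimal sketch, assuming Python, of the linear-versus-quadratic extrapolation point ("Consider for a moment two basic computational processes"). The function names and step counts here are made up purely for illustration; the sketch just shows how a straight line fitted to the first few steps of a quadratic process looks plausible locally and then diverges without bound.*

```python
def quadratic_process(n: int) -> int:
    """The 'true' process: output grows quadratically with the step count."""
    return n * n


def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept, written out by hand."""
    count = len(xs)
    mean_x = sum(xs) / count
    mean_y = sum(ys) / count
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    intercept = mean_y - slope * mean_x
    return slope, intercept


# Observe only the first few steps of the process...
early_steps = list(range(1, 6))
early_values = [quadratic_process(n) for n in early_steps]
slope, intercept = fit_line(early_steps, early_values)

# ...then extrapolate the linear story forward and compare it to reality.
for n in (10, 100, 1000):
    predicted = slope * n + intercept
    actual = quadratic_process(n)
    print(f"step {n:>4}: linear extrapolation ~{predicted:>8.0f}, actual {actual:>8}")
```

Fitted to steps 1 through 5, the line comes out to roughly y = 6n - 7, which tracks those early steps reasonably well; by step 1000 the true value is more than 150 times larger than the extrapolation. The first 20 rungs of the ladder tell you very little about the rest of the climb.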