# We Are Missing Explanation In The Search For Artificial General Intelligence
The problem is that the vast majority of people (many of whom should know better) don't have a good grounding in how we create knowledge and move closer to truth.
We do so by conjecturing some idea and then trying to refute it. Say we have a theory `A`, where `A` is "this new model is AGI". Great, now anyone can come up with a criticism that argues against it. I can say "well, if `A` is true then it should be able to pass this test".
If it doesn't pass my test, then we can say that `A` has been falsified.
Each test that `A` passes (or more generally, each criticism that `A` survives) *corroborates* `A`. But it does not *prove* it.
In order to tentatively hold a theory it must have survived falsification and criticism. So when someone says "yeah, I just looked at this thing for 3 minutes and I've gotta say I think it's legit", that means legitimately nothing.
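The conjecture-and-refutation loop above can be sketched in code. This is a minimal sketch, not a real evaluation harness; the model and the test names are hypothetical stand-ins:

```python
# Sketch of Popper-style conjecture and refutation.
# Theory A: "this model is AGI". Each test is an attempted refutation.
# All names here (passes_arithmetic_test, etc.) are hypothetical.

def passes_arithmetic_test(model) -> bool:
    # Attempted refutation: if A is true, the model should add correctly.
    return model("2 + 2") == "4"

def passes_novel_puzzle_test(model) -> bool:
    # Another attempted refutation, using a problem unseen in training.
    return model("novel puzzle") == "correct answer"

def evaluate_theory(model, tests):
    """Return (falsified, corroborations).

    A single failed test falsifies the theory. Each survived test
    corroborates it but never proves it: there may always be a
    future test the model fails.
    """
    corroborations = 0
    for test in tests:
        if not test(model):
            return True, corroborations   # falsified
        corroborations += 1
    return False, corroborations          # survived so far; still not proven

# A toy "model" that handles arithmetic but fails the puzzle:
toy_model = lambda prompt: "4" if prompt == "2 + 2" else "no idea"

falsified, n = evaluate_theory(
    toy_model, [passes_arithmetic_test, passes_novel_puzzle_test]
)
print(falsified, n)  # prints "True 1": falsified after one corroboration
```

Note the asymmetry built into `evaluate_theory`: one failure is decisive, while no number of passes ever flips the theory to "proven".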
Some people will argue back here: "but with this approach we'll never accept that *anything* is AGI. You'll always have some new criticism and keep moving the goalposts".
But that just is not true. An example will help here. Consider non-universal computation. We had the [difference engine](https://en.wikipedia.org/wiki/Difference_engine), slide rules, the abacus—all of which could only perform a small subclass of possible computations. Imagine asking someone at that point in time "do you think there is one machine that is universal—meaning that it can perform any computation that any other machine can perform?"
At first one might say "How could we know? We could create such a machine, but due to finite constraints on time we wouldn't be able to empirically show that it has an infinite repertoire. I could show a billion examples of computations it performed, and one might still argue that there exists some class of computation outside its reach."
But what Turing showed was that such a machine does exist, namely the [Universal Turing Machine](https://en.wikipedia.org/wiki/Universal_Turing_machine). He provided a *proof* of this, but more broadly he provided an incredibly deep *explanation*. Doing so required improving our understanding of *what computation is*.
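The essence of universality can be sketched as a single fixed program that simulates *any* Turing machine handed to it as data. This is an illustrative sketch, not Turing's original formalism; the rule-table encoding is an assumption of mine:

```python
# One fixed simulator that runs *any* Turing machine given as data —
# the core idea of universality: a single machine whose behavior is
# fully programmable by a description of another machine.

def run_tm(rules, tape, state="start", halt="halt", max_steps=10_000):
    """Simulate a Turing machine.

    rules: {(state, symbol): (write, move, next_state)}, move in {-1, +1}
    tape:  dict mapping head position -> symbol; blank cells read as "_"
    """
    tape = dict(tape)
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    return tape, state

# An example machine *description* (data, not code): flip every bit
# moving right until a blank is reached, then halt.
flipper = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}

tape, state = run_tm(flipper, {0: "1", 1: "0", 2: "1"})
print("".join(tape[i] for i in sorted(tape) if tape[i] != "_"))  # prints "010"
```

The point is that `run_tm` never changes: to get a different computation you change only the description (`flipper`), which is exactly what makes one machine stand in for all of them.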
The problem with arguments around AGI is really twofold: we don't understand key parts of human intelligence, and we don't understand key parts of these large models. Without understanding, it is hard to show that you have arrived at truth. You are always going to be stuck in the land of "well, I've seen 1,000 swans and they've all been white, so there are no black swans". You are trying to justify your theories via empirical confirmation—*not by explanation*.
A counterargument I'd expect at this point is "slow down, buddy, surely we understand these large models. They have architecture X, are trained by some procedure P, and then have a reinforcement learning procedure R tacked on". Agreed, we understand them *at that level of abstraction*. But we don't understand them at other critical levels of abstraction. We know that the brain works by passing around electrochemical signals, yet that doesn't tell us *how* exactly those signals led to Einstein's theory of General Relativity.
Let's use another fun example to make this even more concrete. Imagine walking through Parliament Square in London and being asked "why is that copper atom on the tip of Winston Churchill's nose?"
A low-level description may reference the state of the solar system at some previous point in time, how it evolved according to dynamical laws, the trajectory that the copper atom took from the mine, through the smelter and the sculptor's studio, and so on. This description would also need to refer to atoms all over the planet, engaged in the complex motion we refer to as WWII. But, and here is the point, even if this were possible, after hearing all the details you would still not be able to say "ah yes, I now understand why that atom is there".
The actual explanation you were looking for was along the lines of: the atom is there because Churchill served as prime minister in the House of Commons nearby; and because his ideas and leadership contributed to the Allied victory in the Second World War; and because it is customary to honor such people by putting up statues of them; and because bronze, a traditional material for such statues, contains copper, and so on.
Back to our AI example—our current level of understanding is analogous to the low-level description of how the copper atom got to be on the tip of Churchill's nose. We need to move to more appropriate levels of abstraction and explanation.
Closing thoughts: we make progress and increase our knowledge by seeking out and improving explanations. Anyone appealing to an argument of the form "but it *feels like* AGI is here", is making the same mistake that The Inquisition made in 1633 when it argued against Galileo: "but it does not *feel* as if the earth is moving".
Eppur si muove (and yet it moves).
(To be fair, there are people who genuinely are trying to improve explanations of these systems and build our understanding—two examples being the MechInterp community and ARC. I'm mainly arguing against the masses.)
---
Date: 20250417
Links to:
Tags:
References: