# Argument Is How We Achieve Justification

Philosophers yearn for a [Justification](Justification.md) of why we can rely on our theories. In a sense, [Justification Is Seeking A Foundation](Justification%20Is%20Seeking%20A%20Foundation.md). So how do we justify our theories and [Explanations](Explanations.md)?

###### Logic Can't Save Us

There is no logically necessary connection between [Truth](Truth.md) and explanatory power. Even the best and truest available theory may make a false prediction in particular cases, and that may be when we need the theory most. Even a bad explanation (such as [Solipsism](Solipsism.md)) may be true. No valid form of reasoning can logically rule out such possibilities (or prove them unlikely).

While this may appear unsettling, it does not just apply to [Explanation](Explanations.md). For instance, the laws of [Logical](Logic.md) [Deduction](Deduction.md) themselves could be false. Any attempt to [Justify](Justification.md) the laws of deduction logically must lead either to [Circularity](Circular%20Argument.md) or to an [Infinite Regress](Infinite%20Regress.md). But then what exactly does justify our reliance on the laws of deduction? [The Laws of Deduction Are Justified Because No Explanation Is Improved By Replacing a Law of Deduction](The%20Laws%20of%20Deduction%20Are%20Justified%20Because%20No%20Explanation%20Is%20Improved%20By%20Replacing%20a%20Law%20of%20Deduction.md).

This may not seem to be a very secure foundation for pure [Logic](Logic.md). And it is not perfectly secure! We are [Fallible](Fallibilism.md). And we should not expect it to be perfectly secure, for logical reasoning is no less a physical process than scientific reasoning is, and it is inherently fallible. The laws of logic are not self-evident. There are people, the mathematical ‘intuitionists’, who disagree with the conventional laws of deduction (the logical ‘rules of inference’). They cannot be proved wrong, but we can convincingly *argue* that they are wrong. The laws of logical deduction are in fact an [Explanation](Explanations.md). They solve [Problems](Problem.md), and they do so better than any rival theory that has been proposed.

###### Argument Is All We Have

How exactly did we manage to justify the laws of deduction? We specifically said "they are justified because no explanation is improved by replacing a law of deduction". That sentence is [doing a lot of work](Word%20doing%20a%20lot%20of%20work.md) and is worth unpacking. We have a theory, our laws of deduction, and historically we can consider any rival theories that have been proposed. An [Argument](Argument.md) is constructed showing that *no explanation is improved* by removing, updating, replacing, or discarding the laws of deduction. That is to say, the explanation constituted by the laws of deduction themselves is not improved, and no other, external explanations are improved either. Replacing a law of deduction solves no [Problems](Problem.md). Based on the [Problem Solving Process](Problem%20Solving%20Process.md), [Explanations](Explanations.md) are meant to solve [Problems](Problem.md). [Explanations Are Justified By Their Superior Ability to Solve Problems They Address](Explanations%20Are%20Justified%20By%20Their%20Superior%20Ability%20to%20Solve%20Problems%20They%20Address.md).

According to Popperian methodology, we should rely on the *best-corroborated* theory—that is, the one that has been subjected to the most stringent tests and criticisms and has survived them, while its *rivals* have been refuted.
In these cases it would be *justified* to rely on the theory. In other words, the theory that was best-corroborated during the course of rational [Argument](Argument.md) is the one we should rely on. [Argument](Argument.md) is all we have. [Argument](Argument.md) is not based on anything or justified by anything. Its purpose is to solve [Problems](Problem.md)—to show that a given problem is solved by a given [Explanation](Explanations.md).

###### Explanations Have Structure and Imply Un-severable Consequences

Given that we have a best-corroborated theory that has been argued for extensively, surviving all criticisms while its rivals have been refuted, why exactly should we rely on its predictions about the *future*? For this we can appeal to the [Principles of Rationality](Principles%20of%20Rationality.md).

There is a [Logical Structure](Logical%20Structure.md) associated with argument and with explanation. If the explanation says something about the future, we cannot simply *discard* or *sever* that assertion—to do so would be to *create a worse explanation*! Of course, if by argument it turns out that this is actually a *better* explanation, then there is no problem. But in general, given a *good* explanation, it will be made *worse* by trying to alter it in an ad hoc way. It will *create more problems* and *solve no new problems*. In general it will spoil the existing explanation. The best example of this is seen in [The Theory Of Gravity Holds Except When I Jump From the Eiffel Tower](The%20Theory%20Of%20Gravity%20Holds%20Except%20When%20I%20Jump%20From%20the%20Eiffel%20Tower.md).

Explanations have an intrinsic structure and are linked to other explanations via an extrinsic structure. [Explanations Imply Consequences](Explanations%20Imply%20Consequences.md). If there is no argument in favor of a postulate, then it is unreliable. Thus, it is an irrational argument to say "the prevailing explanation holds now, and it has held in the past, but its future claims are not to be trusted". This constitutes an arbitrary, unexplained postulate. Again, if there is an *argument* and an *explanation* of *why* the prevailing explanation will not hold in the future, and this solves a [Problem](Problem.md), we may actually make [Progress](Progress.md)!

But, you may ask: how exactly does one justify the [Principles of Rationality](Principles%20of%20Rationality.md)? As always, by [Argument](Argument.md)!

---

# Deep Dive

Let's now dig in a little further...

# What can't we do?

There is no logically necessary connection between [Truth](Truth.md) and explanatory power. A bad explanation (such as [Solipsism](Solipsism.md)) may be true. Even the best and truest available theory may make a false prediction in particular cases, and that may be when we need the theory most. No valid form of reasoning can logically rule out such possibilities (or prove them unlikely).

We also can't justify a theory based on the "evidence". The evidence—all experiments whose outcomes the theory correctly predicted in the past—is consistent with an infinite number of theories.

# What can we do?

###### We Rely On The Best Corroborated Theory

According to Popperian methodology, we should rely on the *best-corroborated* theory (the one that has been subjected to the most stringent tests and survived them, while its *rivals* have been refuted). In these cases it would be *justified* to rely on the theory.
The process of corroboration has justified the theory, in the sense that its predictions are more likely to be true than the predictions of *rival* theories. All we can ever say is that a given theory is more likely to be true than the actual rivals that have been proposed.

###### Choosing the Best-Corroborated Theory is Justified By Argument

Choosing the best-corroborated theory is justified by [Argument](Argument.md). This justification is tentative, of course. We are [Fallible](Fallibilism.md) and all theories are subject to [Error](Error.md). When dealing with empirical theories, a good argument will be combined with some evidence—crucial experiments play a pivotal role in deciding between a theory and its rivals. The rivals were refuted, the theory survived. It is this fact—that the actual outcomes of experiment refuted all rival theories and corroborated the prevailing theory—that justifies relying on the prevailing theory.

###### Reliability Is Not Absolute, But Relative To Rival Theories

We can see that the process is *always* making use of *rival* theories. We are not concerned with all *logically possible theories* (of which there are infinitely many), but with those rival theories that are proposed during a rational controversy. This may seem strange: the reliability of a theory depends on the *accident* of what *other* theories—false theories—have been proposed. One might hope that a theory's validity depends only on its own *content* and the experimental evidence. But this is not so strange after all. Our knowledge is always *conditioned* on what we know at some point in time. Consider the classic view of induction. It spoke of theories being reliable or not *given certain available evidence*. In that case, the theory's reliability is *conditional* on the evidence that is available. We have updated that view to reflect that a theory's reliability is conditional on a given [Problem Situation](Problem%20Situation.md).

In the Popperian picture of scientific progress, it is not observations but problems, controversies, theories and criticism that are primary. Experiments are designed and performed only to resolve controversies. Therefore, only experimental results that actually do refute a genuine rival theory constitute corroboration. The ‘reliability’ that corroboration confers is *not absolute* but only *relative* to the other contending theories. We expect the strategy of relying on corroborated theories to select the best theories from those that are proposed. That is a sufficient basis for action. We do not need (and could not validly get) any assurance about how good even the best proposed course of action will be. We may always be mistaken, but so what? We cannot use theories that have yet to be proposed; nor can we correct errors that we cannot yet see.

###### We Rely On The Best-Corroborated Theory Because It Is The Only Rationally Tenable Theory Available

We rely on the best-corroborated theory in the future because it is the only *rationally tenable* theory available. This is because all of its rivals have been refuted. Imagine we have two rival theories, $A$ and $B$. In an experiment we show that $B$ makes a false prediction and is thus refuted. This is our justification for our reliance on $A$. Note however that the refutation of $B$ is not a *logically relevant criticism* of its performance in the future—it *could* make perfect predictions in every situation other than the one in which it made an error! But this choice between theories is not a matter of *logic*.
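
To make concrete the sense in which corroboration is relative to the rivals actually proposed, here is a minimal toy sketch in Python. This is my own illustration, not part of the Popperian argument itself: the theories, heights, and outcomes are all hypothetical, and the filter below models only the *bookkeeping* of refutation, not the creative process of [Argument](Argument.md).

```python
# Toy model: rival "theories" as prediction functions, past experiments as data.
# Everything here is hypothetical and chosen purely for illustration.

past_experiments = [
    # (drop height in metres, observed outcome)
    (1.0, "falls"),
    (10.0, "falls"),
    (50.0, "falls"),
]

def theory_A(height: float) -> str:
    """Rival A: unsupported objects always fall."""
    return "falls"

def theory_B(height: float) -> str:
    """Rival B: identical to A below 5 m, but predicts floating above 5 m."""
    return "floats" if height > 5.0 else "falls"

def is_refuted(theory, experiments) -> bool:
    """A theory is refuted if any experiment actually run contradicts it."""
    return any(theory(height) != outcome for height, outcome in experiments)

rivals = {"A": theory_A, "B": theory_B}
tenable = [name for name, theory in rivals.items()
           if not is_refuted(theory, past_experiments)]
print(tenable)  # ['A'] -- B failed the 10 m and 50 m observations
```

Notice what the filter does *not* do: it says nothing about heights that have never been tested, and a third rival that agreed with all three observations would pass just as well. Refutation here is always relative to the experiments actually run and the rivals actually proposed. With that in mind, return to the logical worry.
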
Sure, $B$ *could* make perfect predictions in the future, and $A$ could make false predictions. At that point a new [Problem](Problem.md) would be created and we would need to return to [Conjecture and Criticism](Conjecture%20and%20Criticism.md) in order to arrive at a better theory. But we are concerned with what theory to rely on *now*. And as of this moment, $B$ has been refuted and $A$ has not—thus we are right to rely on $A$. Of course, someone could propose a variant $B'$ that is not refuted. This is one way in which we can make progress!

Let's extend our example: suppose that $A$ and $B$ make different predictions about some future, yet to be observed event. We have already refuted $B$ based on some experiment we *have actually run*. $A$ has not been refuted. But what about this situation should make us rely on $A$ and not $B$ for this entirely different future prediction? It is simply that $A$ has not been shown to be false yet! Based on the structure and implications of $A$, it is consistent with all observations thus far—$B$ is not.

But suppose that you have an argument for *why* $B$ is actually the right theory to rely on for this future prediction. This could lead to a *problem*! It could be that $B$ needs to be updated in order to correctly account for the past event. Or that $A$ needs to be updated. Or we need a new theory $C$ that is some combination of $A$ and $B$. However, it need not lead to a problem—it depends on whether the explanation of *why* we should rely on $B$ (or a variant, $B'$) for a future prediction is a good explanation or not. For instance, it could be a *poor* explanation such as:

$$B' = \text{use } A \text{ for past events and } B \text{ for future events}$$

But this just creates new problems and spoils $A$. Why is there this discontinuity? Why would $A$ not hold in the future? Why is $B$ suddenly preferable to $A$ in the future, given that it made false predictions in the past? And so on.

The theme here is that when reasoning in this way you often return to a variant of "well, *couldn't* such-and-such *technically* be true?" It is this vagueness that hurts our ability to reason. You must always ask: what is such-and-such? What problem does it solve? What problems does it create? Is it an entirely *new* theory at that point? In that case, we are now dealing with a different rival altogether! The reason it can feel insecure to rely on the best-corroborated theory at any point in time is that your creative mind will naturally start conjecturing new variants and rivals, as well as counterfactual scenarios under which one may be preferable. But this has naturally just updated our problem situation! We now have additional rivals we must refute!

However, notice that this problem is far less "in your face" when dealing with an incredibly good explanation, such as that provided by [General Relativity](General%20Relativity.md) about gravity. In this case it is very challenging to come up with a good rival theory. This is because general relativity is a great explanation that is deeply connected to many other explanations—varying it is likely to make it worse, and finding a rival is challenging due to all of the constraints it must satisfy.

###### Logical Consistency Is Insufficient: The Best-Corroborated Theory Must Be A Good Explanation

Let's move to a new thought experiment. Let us have a theory $A$ that is the prevailing theory of gravity, and a theory $B$ that is a "rival".
It is identical to $A$, but states that if I were to jump from a tall building I would float. I have never jumped from a tall building, so all past tests of $A$ are tests of $B$. Thus both $A$ and $B$ have been equally corroborated. How could rival $B$ be untenable? The answer is that it is a terrible explanation! It has taken $A$ and just added one *unexplained qualification*. This qualification is in effect a new theory, but there is *no argument* against $A$ as it relates to my gravitational properties, or in favor of a new theory about them. This postulate has been subjected to *no criticism* and *no experimental testing*. It does *not solve a problem*, and it doesn't suggest any interesting problem that it could solve. Worst of all, this qualification *explains nothing* and *spoils the existing explanation* of gravity! It is *this explanation* that justifies our relying on $A$ and not $B$.

###### Theories Postulating Anomalies Without Explaining Them Are Less Likely To Make True Predictions

We are not saying that it is a [Principle of Rationality](Principles%20of%20Rationality.md) that a theory which asserts the existence of an objective, physical anomaly is less likely to make true predictions than one that doesn't. What we are saying is that *theories postulating anomalies without explaining them are less likely than their rivals to make true predictions*. More generally, it is a principle of rationality that theories are postulated in order to solve problems. Therefore any postulate which solves no problem is to be rejected. That is because a good explanation qualified by such a postulate becomes a bad explanation. It is in this way that there really is an *objective difference* between theories that make unexplained predictions and those that don't. Put simply: if there is *no argument* in favor of a postulate, then it is not reliable.

###### The Structure Of Explanations

Consider theory $B$ again. Is the postulate about me floating just superfluous, or is it positively bad? Put another way, can we just ignore it and effectively get back to theory $A$, or does it *break* theory $A$ altogether? To answer that we must consider the implications, or consequences, of this postulate.

The postulate states that I would float, unsupported. ‘Unsupported’ means ‘without any upward force acting’ on me, so the suggestion is that I would be immune to the ‘force’ of gravity which would otherwise pull me down. But according to the general theory of relativity, gravity is not a force but a manifestation of the curvature of spacetime. This curvature explains why unsupported objects, like myself and the Earth, move closer together with time. Therefore, in the light of modern physics, theory $B$ is presumably saying that there is an upward force on me, as required to hold me at a constant distance from the Earth. But where does that force come from, and how does it behave? For example, what is a ‘constant distance’? If the Earth were to move downwards, would I respond instantaneously to maintain the same height (which would allow communication faster than the speed of light, contrary to another principle of relativity), or would the information about where the Earth is have to reach me at the speed of light first? If so, what carries this information? Is it a new sort of wave emitted by the Earth — in which case what equations does it obey? Does it carry energy? What is its quantum-mechanical behavior? Or is it that I respond in a special way to existing waves, such as light?
In that case, would the anomaly disappear if an opaque barrier were placed between me and the Earth? Isn’t the Earth mostly opaque anyway? Where does ‘the Earth’ begin: what defines the surface above which I am supposed to ‘float’? For that matter, what defines where I begin? If I hold on to a heavy weight, does it float too? If so, then the aircraft in which I have flown could have switched off their engines without mishap. What counts as ‘holding on’? Would the aircraft then drop if I let go of the arm rest? And if the effect does not apply to things I am holding on to, what about my clothes? Will they weigh me down and cause me to be killed after all, if I jump over the railing? What about my last meal?

We could go on like this ad infinitum. The more we consider the implications of the postulate, the more unanswered questions we find. This theory is not just incomplete. The postulate has created fresh problems by *spoiling* satisfactory explanations of other phenomena! Thus the additional postulate is not just superfluous, it is *positively bad*.

In general, perverse but unrefuted theories which one can propose off the cuff fall roughly into two categories. There are theories that postulate *unobservable entities*, such as particles that do not interact with any other matter. They can be *rejected for solving nothing* (‘Occam’s razor’, if you like). And there are theories, like $B$, that predict *unexplained observable anomalies*. They can be *rejected for solving nothing* and *spoiling existing solutions*. It is not that they conflict with existing observations. It is that they remove the explanatory power from existing theories by asserting that the predictions of those theories have exceptions, but not explaining how.

This is worth pausing on. In [Logic](Logic.md), [Deduction](Deduction.md) has [Logical Consequences](Logical%20Consequence.md). I have also talked about how [Explanations Imply Consequences](Explanations%20Imply%20Consequences.md). The mental model I have here is that both deduction and explanation have a structure through which truth can *flow* or *move around*. Part of the structure that an explanation has *links it to other explanations*. Adding an additional postulate (I will float) to an existing explanation has *consequences* that flow to many different places. In our example, a consequence is that we have worsened our explanation of gravity (which we are actively dealing with), but we have also worsened our explanations of fields, waves, the speed limit imposed by light, and so on. These are *consequences* of the new rival explanation. And it is these consequences that clearly make it *worse* than the prevailing theory.

###### Explanations Imply Consequences; Sometimes These Are About The Future

Let us consider the components of our argument one more time. We have our [Problem Situation](Problem%20Situation.md) (a theory and its existing rivals). We have the [Principles of Rationality](Principles%20of%20Rationality.md). And we have past observations that were used to refute rival theories and corroborate the prevailing theory. Where exactly is the justification of future predictions hiding? Is there a [Logical Gap](Logical%20Gap.md)? There is no logical gap, and the components of our argument *do* include assertions about the future. The best existing theories, which cannot be abandoned lightly because they are the solutions of problems, contain predictions about the future.
And *these predictions cannot be severed from the theories’ other content*, as we attempted above with $B'$, because that would spoil the theories’ explanatory power.

> It is this structure—the tendrils of an explanation reaching out—that says something about the future. We cannot discard them.

The word "severed" was chosen carefully. This is referring to a strong, clearly connected structure that you have no control over! It is implied by your theories, just as consequences are implied in classic logical deduction. You cannot arbitrarily disconnect from this structure. Note that this is similar to the consequences implied in the case of [Reach](Reach.md).

So we have *no universal principle of reasoning* which says that the future will resemble the past, but *we do have actual theories which say that*. In other words, we have *specific theories*, our best explanations, that may say that the future will resemble the past. It is these specific theories that we are right to rely on.

###### Argument Justifies The Principles Of Rationality

We have seen that future predictions can be justified by appeal to the [Principles of Rationality](Principles%20of%20Rationality.md). But what justifies those? They are not, after all, truths of pure logic. So there are two possibilities: either they are unjustified, in which case conclusions drawn from them are unjustified too; or they are justified by some as yet unknown means. In either case there is a missing justification. I no longer suspect that this is the problem of induction in disguise. Nevertheless, having exploded the problem of induction, have we not revealed another fundamental problem, also concerning missing justification, beneath?

What justifies the principles of rationality is [Argument](Argument.md), as usual. As an extreme example, what justifies our relying on the laws of deduction, despite the fact that any attempt to justify them logically must lead either to [Circularity](Circular%20Argument.md) or to an [Infinite Regress](Infinite%20Regress.md)? **They are justified because no explanation is improved by replacing a law of deduction**. This may not seem to be a very secure foundation for pure [Logic](Logic.md). And it is not perfectly secure! We are [Fallible](Fallibilism.md). And we should not expect it to be perfectly secure, for logical reasoning is no less a physical process than scientific reasoning is, and it is inherently fallible. The laws of logic are not self-evident. There are people, the mathematical ‘intuitionists’, who disagree with the conventional laws of deduction (the logical ‘rules of inference’). They cannot be proved wrong, but we can convincingly *argue* that they are wrong.

###### Middle Out

There is a misconception present throughout this entire note (and in [7 - A Conversation About Justification](7%20-%20A%20Conversation%20About%20Justification.md)). The misconception is about the very nature of [Argument](Argument.md) and [Explanation](Explanations.md). We were assuming that arguments and explanations, such as those that justify acting on a particular theory, have the form of mathematical proofs, proceeding from assumptions to conclusions. We look for the ‘raw material’ (axioms) from which our conclusions (theorems) are derived. Now, there is indeed a logical structure of this type associated with every successful argument or explanation. But the process of argument does not begin with the ‘axioms’ and end with the ‘conclusion’.
Rather, it starts *in the middle*, with a version that is riddled with inconsistencies, gaps, ambiguities and irrelevancies. All these faults are criticized. Attempts are made to replace faulty theories. The theories that are criticized and replaced usually include some of the ‘axioms’. That is why it is a mistake to assume that an argument *begins with*, or is justified by, the theories that eventually serve as its ‘axioms’. The argument ends — tentatively — when it seems to have shown that the associated explanation is satisfactory. The ‘axioms’ adopted are not ultimate, unchallengeable beliefs. They are tentative, explanatory theories.

Argument is not the same species of thing as deduction, or the non-existent induction. It is not based on anything or justified by anything. And it doesn’t have to be, because its purpose is to solve problems—to show that a given problem is solved by a given explanation.

---
Date: 20250404
Links to: [7 - A Conversation About Justification](7%20-%20A%20Conversation%20About%20Justification.md)
Tags:
References: