[ Written in 1960 for J. H. Woodger's seventieth birthday. In company with other such papers, it appeared in "Synthese" (Volume 15, 1963), and afterward in J. R. Gregg and F. T. C. Harris, eds., "Form and Strategy in Science" (Dordrecht, Holland: D. Reidel Publishing Co., 1964). Subsequently reprinted in Quine's own "The Ways of Paradox" (Cambridge, Massachusetts: Harvard University Press, 1966).]
It is not to be wondered that theory makers seek simplicity. When two theories are equally defensible on other counts, certainly the simpler of the two is to be preferred on the score of both beauty and convenience. But what is remarkable is that the simpler of the two theories is generally regarded not only as the more desirable but also as the more probable. If two theories conform equally to past observations, the simpler of the two is seen as standing the better chance of confirmation in future observations. Such is the maxim of the simplicity of nature. It seems to be implicitly assumed in every extrapolation and interpolation, every drawing of a smooth curve through plotted points. And the maxim of the uniformity of nature is of a piece with it, uniformity being a species of simplicity.
Simplicity is not easy to define. But it may be expected, whatever it is, to be relative to the texture of a conceptual scheme. If the basic concepts of one conceptual scheme are derivative concepts of another, and vice versa, presumably one of two hypotheses could count as simpler for the one scheme and the other for the other. This being so, how can simplicity carry any peculiar presumption of objective truth? Such is the implausibility of the maxim of the simplicity of nature.
Corresponding remarks apply directly to the maxim of the uniformity of nature, according to which, vaguely speaking, things similar in some respects tend to prove similar in others. For again similarity, whatever it is, would seem to be relative to the structure of one's conceptual scheme or quality space. Any two things, after all, are shared as members by as many classes as any other two things; degrees of similarity depend on which of those classes we weight as the more basic or natural.
Belief in the simplicity of nature, and hence in the uniformity of nature, can be partially accounted for in obvious ways. One plausible factor is wishful thinking. Another and more compelling cause of the belief is to be found in our perceptual mechanism: there is a subjective selectivity that makes us tend to see the simple and miss the complex. Thus consider streamers, as printers call them: vertical or diagonal white paths formed by a fortuitous lining up of the spaces between words. They are always straight or gently curved. The fastidious typesetter makes them vanish just by making them crooked.
This subjective selectivity is not limited to the perceptual level. It can figure even in the most deliberate devising of experimental criteria. Thus suppose we try to map out the degrees of mutual affinity of stimuli for a dog, by a series of experiments in the conditioning and extinction of his responses. Suppose further that the resulting map is challenged: suppose someone protests that what the map reflects is not some original spacing of qualities in the dog's pre-experimental psyche or original fund of dispositions, but only a history of readjustments induced successively by the very experiments of the series. Now how would we rise to this challenge? Obviously, by repeating the experiments in a different order on another dog. If we get much the same map for the second dog despite the permutation, we have evidence that the map reflects a genuinely pre-experimental pattern of dispositions. And we then have evidence also of something more: that this pattern or quality space is the same for both dogs. By the very nature of our criterion, in this example, we get evidence either of uniformity or of nothing. An analysis of experimental criteria in other sciences would no doubt reveal many further examples of the same sort of experimentally imposed bias in favor of uniformity, or in favor of simplicity of other sorts.
This selective bias affords not only a partial explanation of belief in the maxim of the simplicity of nature but also, in an odd way, a partial justification. For, if our way of framing criteria is such as to preclude, frequently, any confirmation of the more complex of two rival hypotheses, then we may indeed fairly say that the simpler hypothesis stands the better chance of confirmation; and such, precisely, was the maxim of the simplicity of nature. We have, insofar, justified the maxim while still avoiding the paradox that seemed to be involved in trying to reconcile the relativity of simplicity with the absoluteness of truth.
This solution, however, is too partial to rest with. The selective bias in favor of simplicity, in our perceptual mechanism and in our deliberate experimental criteria, is significant but not overwhelming. Complex hypotheses do often stand as live options, just as susceptible to experimental confirmation as their simpler alternatives; and in such cases still the maxim of simplicity continues to be applied in scientific practice, with as much intuitive plausibility as in other cases. We fit the simplest possible curve to plotted points, thinking it the likeliest curve pending new points to the contrary; we encompass data with a hypothesis involving the fewest possible parameters, thinking this hypothesis the likeliest pending new data to the contrary; and we even record a measurement as the roundest near number, pending repeated measurements to the contrary.
Now this last case, the round number, throws further light on our problem. If a measured quantity is reported first as 5.21, say, and more accurately in the light of further measurement as 5.23, the new reading supersedes the old; but if it is reported first as 5.2 and later as 5.23, the new reading may well be looked upon as confirming the old one and merely supplying some further information regarding the detail of further decimal places. Thus the simpler hypothesis, 5.2 as against 5.21, is quite genuinely ten times likelier to be confirmed, just because ten times as much deviation is tolerated under the head of confirmation.
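The tolerance arithmetic here can be made concrete in a short sketch (an illustration added for this reprinting, not part of the original argument; the function `confirmed_interval` and its treatment of reported digits are my own assumptions about what "tolerated deviation" amounts to):

```python
def confirmed_interval(reported: str):
    """Half-open interval of later, more accurate readings that would
    count as confirming the reported figure, given only the decimal
    places it commits to."""
    decimals = len(reported.split(".")[1]) if "." in reported else 0
    half = 0.5 * 10 ** (-decimals)  # half the width of the last digit
    value = float(reported)
    return (value - half, value + half)

# The essay's example: a first reading of 5.2 versus 5.21, with a
# later, more accurate reading of 5.23.
lo, hi = confirmed_interval("5.2")      # roughly (5.15, 5.25)
lo2, hi2 = confirmed_interval("5.21")   # roughly (5.205, 5.215)

print(lo <= 5.23 < hi)        # 5.23 confirms the reading 5.2
print(lo2 <= 5.23 < hi2)      # but refutes the reading 5.21

# The coarser report tolerates ten times the deviation:
print(round((hi - lo) / (hi2 - lo2)))
```

On this accounting, 5.2 is confirmed by any later value in an interval ten times as wide as the interval claimed by 5.21, which is just the sense in which the rounder report is ten times likelier to be confirmed.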
True, we do not customarily say "simple hypothesis" in the round-number case. We invoke here no maxim of the simplicity of nature, but only a canon of eschewing insignificant digits. Yet the same underlying principle that operates here can be detected also in cases where one does talk of simplicity of hypotheses. If we encompass a set of data with a hypothesis involving the fewest possible parameters, and then are constrained by further experiment to add another parameter, we are likely to view the emendation not as a refutation of the first result but as a confirmation plus a refinement; but if we have an extra parameter in the first hypothesis and are constrained by further experiment to alter it, we view the emendation as a refutation and revision. Here again the simpler hypothesis, the one with fewer parameters, is initially the more probable simply because a wider range of possible subsequent findings is classified as favorable to it. The case of the simplest curve through plotted points is similar: an emendation prompted by subsequent findings is the likelier to be viewed as confirmation-cum-refinement, rather than as refutation and revision, the simpler the curve.
We have noticed four causes for supposing that the simpler hypothesis stands the better chance of confirmation. There is wishful thinking. There is a perceptual bias that slants the data in favor of simple patterns. There is a bias in the experimental criteria of concepts, whereby the simpler of two hypotheses is sometimes opened to confirmation while its alternative is left inaccessible. And finally there is a preferential system of score-keeping, which tolerates wider deviations the simpler the hypothesis. These last two of the four causes operate far more widely, I suspect, than appears on the surface. Do they operate widely enough to account in full for the crucial role that simplicity plays in scientific method?