I submitted the following paper as my thesis for a Bachelor of Arts degree in Philosophy at the University of Iceland, May 2006.


Simplicity as a Theoretical Virtue

Table of Contents

Summary
I. Introduction
II. The Problems of Simplicity
III. A Criterion for Theory Choice
IV. Justification
V. Empirical Arguments for Simplicity
VI. Simplicity as an a priori Principle
VII. Simplicity as Content
VIII. Sober on Simplicity
IX. Quine on the Maxim of the Simplicity of Nature
X. Simplicity Relative to Cognition
XI. Conclusion
References

Summary

Simplicity seems to play a key role in how many scientists and philosophers devise and defend their theories or hypotheses. In this paper I examine the grounds for regarding simplicity as a theoretical virtue, comparable to other established theoretical virtues such as powers of explanation, predictive powers, and conformity with empirical evidence and related theories. I will also consider the claim that, other things being equal, we should choose the simplest hypothesis when presented with competing hypotheses accounting equally well for the data -- i.e. that a criterion for theory choice based on simplicity considerations is warranted.







I. Introduction

It is suitably ironic that the concept of simplicity has proven a complex subject matter for philosophers. Both the meaning of simplicity and its relevance to scientific theories or explanatory hypotheses have long been matters of dispute. When William of Ockham first put forward his now famous razor, "Entities should not be multiplied beyond necessity"1, in the 14th century, his contemporary Walter of Chatton took exception to it and came up with his own anti-razor: "If three things are not enough to verify an affirmative proposition about things, a fourth must be added, and so on"2. Four hundred years later, Kant cautioned that 'the variety of entities should not be rashly diminished'3. But while there may very well be a kernel of truth in their warning that we should not be overly stringent in our postulations, the fact remains that many philosophers and scientists attribute a key role to simplicity in the evaluation of scientific theories. The role of simplicity may even extend to patterns of reasoning outside of science, as the methods used in science are often said to reflect the methods of reasoning employed by people in day-to-day life4.

The concept of theoretical simplicity has not been defined in such a way as to command broad consensus. There is, however, within the body of philosophical literature, a common distinction made between two facets of simplicity: ontological parsimony, the number and complexity of the things postulated, and syntactic elegance, measured by the number and complexity of the hypotheses involved. While the principle of Ockham's razor has been formulated in a number of different ways, often in such a way as to be applicable to both5, it is generally used to refer only to the former, as the 'principle of parsimony'. I shall be concerned with both these facets of simplicity, and how they relate to our choice of theories. In this paper I will examine the grounds for regarding simplicity as a theoretical virtue, comparable to other established theoretical virtues such as powers of explanation, predictive powers, conformity with empirical evidence and fit within a framework of related theories. I will also consider the claim that, other things being equal, we should choose the simplest theory when presented with competing theories that account equally well for the data.

II. The Problems of Simplicity

Sober (Sober 2001) outlines the philosophical problems of simplicity as divided into three distinct tasks: how simplicity is to be measured, how it can be justified, and how it should be traded off, i.e. weighed against other theoretical virtues. I consider this a fair assessment of the tasks involved. In this paper I shall primarily be concerned with whether or how theoretical simplicity may be justified. However, it will hardly do to proceed without a clear notion of what is meant by 'theoretical simplicity'. For my purposes, I will largely make use of Swinburne's definition (Swinburne 1997): when we evaluate the simplicity of a given theory or hypothesis, we should consider four separate aspects.

First, there is the number of entities postulated by the theory. If we have two theories identical in all respects, with the sole exception that one theory postulates the existence of a greater number of entities than the other, we shall say that the latter is simpler. As an example, if one theory postulates two causes, A and B, to explain an observed effect E, while a second theory postulates A as causing E but makes no mention of B, the latter theory is simpler in this respect. The maxim of Ockham's Razor, or the 'principle of parsimony', consists in systematically preferring the theory that postulates the fewest entities, ceteris paribus.

Secondly, there is the number of different kinds of entities, or properties of entities. A theory, for example, which postulates two kinds of substances (e.g. Cartesian dualism) is less simple than a corresponding theory which only postulates a single kind of substance (e.g. monism).

Thirdly, if a certain formulation of a theory contains a term or describes a property which is only comprehensible in light of some other term or property, we shall say that the term or property to which it can be reduced is simpler than the term or property so reduced. A classic example of this is the artificially constructed property 'grue', first put forth by Nelson Goodman6: something is 'grue' if and only if it is observed as green before a particular time T, or else observed as blue after time T. According to our definition, the fictitious property 'grue' is more complex than the property 'green', since it is possible to comprehend the meaning of 'green' without comprehending the meaning of 'grue', but not vice versa.

Finally, as our fourth aspect, we shall say that (other things being equal) a theory consisting of few laws and few relations of variables is simpler than a theory with many laws and many relations of variables. Thus, simpler theories require fewer symbols in order to be expressed, i.e. they depend on a smaller vocabulary. Exactly how we evaluate complexity in this respect is not, I believe, of paramount importance to our discussion. However, to make this point clear we shall say that a law postulating a certain relationship between some variables x, y and z is less simple than one postulating a relationship only between x and y. This results in the relationship y = 2x being simpler than y = zx, and so forth.
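
To make the counting involved in this fourth aspect concrete, the following is a minimal sketch of one crude way to operationalise it; the function variables_in and the use of Python's ast module are my own illustration, not part of Swinburne's account.

import ast

def variables_in(expression: str) -> set[str]:
    """Return the distinct variable names appearing in an expression string."""
    tree = ast.parse(expression, mode="eval")
    return {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}

# Right-hand sides of the two candidate laws mentioned above:
print(sorted(variables_in("2 * x")))   # ['x']      -- y is related to one other variable
print(sorted(variables_in("z * x")))   # ['x', 'z'] -- y is related to two other variables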

Roughly speaking, we can say that the first two aspects of simplicity correspond to ontological parsimony, while the last two correspond to syntactic elegance. With respect to the aforementioned facets of simplicity, we shall maintain that a given theory A is simpler than another theory B if and only if the simplest formulation of A is simpler than the simplest formulation of B. There will, of course, be cases where there are trade-offs between the different facets -- e.g. cases where increased ontological parsimony results in reduced syntactic elegance, or vice versa. I do not intend to put forward a formula for calculating overall theoretical simplicity for comparison when such conflicts arise, nor do I believe that it is possible to do so in such a way as to garner general satisfaction. Without a fixed method of measurement, we will inevitably run into the sort of vagueness that plagues moral utilitarianism. If we compare, as an example, Kepler's theory of elliptical planetary motion with Copernicus' theory of circular planetary motion, we see that Copernicus' theory has laws covering forty different motions, while Kepler's has far fewer laws. However, there are more terms in the equations describing elliptical motions, which might be taken as indicating that Kepler's theory is more complex. In this case, we have to weigh one aspect of simplicity (the number of laws) against another (the number of symbols). In retrospect, it is easy to declare Kepler's theory the simpler of the two, but this may very well have been far from obvious at the time when both theories were living options.

Despite difficult cases such as these, there is nevertheless bound to be wide-ranging consensus on how to weigh one aspect against another, as is the case with moral utility. And even where the balance is so fine that our definition of simplicity cannot help us choose between two theories whose facets of simplicity pull in different directions, it may still help us by eliminating theories that are obviously more complex than others. I shall return to this point later.

III. A Criterion for Theory Choice

Assuming that our broad definition of simplicity is acceptable, there remains the question of how to formulate it as a useful decision-making criterion. We shall do so in the following fashion:

Let us suppose that we have two rival hypotheses, both of which are more or less equally satisfactory in terms of powers of explanation, predictive powers, conformity with empirical data and all other discernible theoretical virtues. A criterion which instructs us, in such cases, to systematically prefer the simpler hypothesis (in the sense outlined previously) we shall call a criterion of simplicity.

Our criterion of simplicity only applies to situations where other relevant factors are considered more or less equally satisfactory. But in hotly disputed cases where two theories vie for our approval, the adherents of one theory will typically argue that the other theory does not explain the data adequately. Take the case of the dualism-monism debate. While the criterion of simplicity would dictate that we go with monism rather than dualism, since it postulates fewer kinds of things, its persuasive power is to a large extent derived from the assumption that the two theories are more or less equally satisfactory in other respects. The monist might argue that the dualist's view involves needless and unpleasant complexities, but the dualist would simply retort that the monist fails to account for all the facts in a satisfactory way, e.g. that the monist's theory does not account for the nonphysical items of experience of which the dualist claims to be aware. In light of this, an appeal to simplicity considerations is unlikely to sway him7. It is therefore necessary to stress that a criterion of simplicity will only exert its force when the facts that need explaining are agreed upon by the parties involved.

IV. Justification

How can we justify a criterion of simplicity? Before we proceed, we must consider what scope we wish to give the criterion. That which makes simplicity considerations relevant in one context may very well have nothing to do with what makes them relevant in another, although we call it 'simplicity' in both cases. Thus, while we may find an acceptable justification for a criterion of simplicity in some given set of circumstances, this justification need not apply in other, different circumstances. If we seek a global justification for our criterion (i.e. one which justifies general application irrespective of particular circumstances) we must demonstrate that it applies to all cases of theory choice, whereas a local justification will only justify the application of the criterion under a specific set of circumstances. Quine (Quine 1966) and, more recently, Sober (Sober 1994) have expressed reservations about the possibility of justifying a global criterion of simplicity. I shall discuss their arguments at a later point. Meanwhile, any reference to potential justifications for the criterion of simplicity should be understood as referring to global justifications.

In the past, the justifications for simplicity considerations were often rooted in theological arguments. A divine creator was expected to have created a beautiful (and therefore, on some accounts, simple) universe. Aquinas observed that 'nature does not employ two instruments where one suffices'8. But with foundations of this kind, a criterion of simplicity is only rationally justified insofar as the existence of a divine creator of a certain disposition can be rationally justified. Understandably, both scientists and philosophers are therefore reluctant to ground methodological principles in religious beliefs.

Most contemporary justifications for including simplicity considerations in theory choice have come in other forms, ranging from purely methodological justifications to epistemic ones. Methodological justifications are typically of a pragmatic or aesthetic nature. Simple theories are to be preferred because they are easier to apply, and because they are more agreeable to the intellect. An aesthetic preference for simplicity appears to be widespread amongst scientists9, and simplicity may in many cases be conducive to intellectual satisfaction. Likewise, a theory which is easier to comprehend and apply carries obvious practical benefits. Given two competing theories differing only in simplicity, it is therefore easy to understand why we might prefer the simpler, for purely pragmatic and aesthetic reasons. This could give us grounds for regarding simplicity as a theoretical virtue. However, we may wish to ground the preference for simplicity in something more solid than aesthetic judgement and practical convenience, for these justifications result in an uncomfortable degree of arbitrariness. In the words of Peirce, "[...] it makes of inquiry something similar to the development of taste"10. A theory which is easy for one inquirer to apply may be difficult for another, and in a similar way, that which is agreeable to one intellect may be disagreeable to another. And even if a broad consensus exists in these matters amongst the community of inquirers, it only leads to pragmatic intersubjectivity, while in no way characterising simplicity as an objective, universal theoretical virtue.

Epistemic justifications, on the other hand, seek to show that it is rational to prefer simpler theories because they are more likely to be true. While it is psychologically and pragmatically understandable that we should prefer a simple theory to a complex one, a much more difficult problem arises when we look for epistemic reasons. Epistemic justifications, should they withstand our scrutiny, would provide a firmer foundation for our criterion. Is there any reason whatsoever to suppose that, other things being equal, the simpler theory is more likely to be true? There are a number of different answers to this question.

V. Empirical Arguments for Simplicity

Appealing to induction is one way of epistemically justifying simplicity considerations in our choice of hypotheses. We could argue that using the criterion of simplicity has worked well in the past, and that it is therefore reasonable to expect it to continue to do so in the future. Would this not provide us with the justification we seek? To phrase this argument in a more rigorous fashion:

Let us consider a given set of data at t0 (some point in the past), which is explained by two theories, T1 and T2. The two theories are equally satisfactory in all respects except that T1 is the simpler of the two (according to our previous definition of simplicity). After t0, the predictions of T1 have generally proven more in accordance with the data than those of T2. Consequently, the simpler theory has generally made better predictions. This justifies the belief that simpler theories will continue to do so on future occasions.

Although it is far from clear that a close scrutiny of the history of science will indicate a generally greater predictive success for simpler theories, let us for the sake of argument suppose that it does. Even so, and even if we ignore the philosophical quagmire that is the 'problem of induction', this argument still has major problems. First of all, it does not provide any reason why simpler theories have in fact proven better. The fact that simpler theories have proven better in the past may be causally dependent on some other factors that correlate with simplicity. Then there is a much more serious foundational problem. Just as a justification of induction by way of induction fails to be convincing, a justification for a criterion of simplicity which rests on use of the criterion itself will also fail to convince. There are many different ways of drawing inductive inferences from past data concerning the success of different theories throughout the history of science. Inferring that "simpler theories are, other things being equal, generally better than more complex ones" is just one of many. We might just as well infer that "usually theories formulated by Greeks in the bath, by Englishmen who watch an apple fall to the ground, by Germans who work in patent offices... etc. ..., and which initially appeal to the scientific community, predict better than other theories"11. An inference of this kind, at great length and in great complexity, could be made to account equally well for the historical data available to us concerning scientific theories. We could even go so far as to infer that choosing the most 'simplex' theory has proven best, where 'simplex' is an arbitrary property of our own making which consists in being simple (according to our previous definition) up to t0 but complex after t0, where t0 is now some point in the future. Of course, inferences of this kind are ridiculous, but they are ridiculous precisely because they are so complicated and contrived. It is the criterion of simplicity itself which makes the inductive inference to a criterion of simplicity considerably more plausible than these other inferences.

We thus run up against the fact that simplicity plays a key role in how we devise inductive arguments. Any argument which seeks to employ empirical justification for simplicity must end up in circularity, relying precisely on that which it seeks to justify. A justification must therefore be sought elsewhere.

VI. Simplicity as an a priori Principle

Empirical arguments may be doomed to circularity, but what about wholly logical arguments? Attempts to defend simplicity as an a priori theoretical virtue, popular in the 19th century, have now largely been abandoned by philosophers, but there are still some who defend a criterion of simplicity on purely logical grounds. One such philosopher is Richard Swinburne, who maintains that:

"[...] other things being equal -- the simplest hypothesis proposed as an explanation of phenomena is more likely to be the true one than is any other available hypothesis, that its predictions are more likely to be true than those of any other available hypothesis, and that it is an ultimate a priori epistemic principle that simplicity is evidence for truth." (Swinburne 1997)

In support of his thesis, Swinburne provides an example which I present here in a slightly altered form12: Let us suppose that a series of observations gives us the following values for two related variables, x and y:

x:   0  1  2  3  4  5  6
y:   0  2  4  6  8  10 12

We should interpret these numbers as representative of some scientific measurements we have obtained. A formula immediately suggests itself concerning the relationship between the two variables:

A) y = 2x.

This formula yields data in accordance with the evidence obtained so far, and should allow us to extrapolate the value of y based on any value for x, e.g. x = 7 -> y = 14. However, the following formula also satisfies our criterion:

B) y = x^7 - 21x^6 + 175x^5 - 735x^4 + 1624x^3 - 1764x^2 + 722x.

If we compare formulas A and B, it is mathematically clear that they account equally well for the evidence while making drastically different projections concerning the value of y for values of x other than the ones obtained hitherto. And formula B is but one of many formulas satisfying the conditions -- it is trivially true that there is an infinite number of competing formulas. Why, then, should we prefer formula A to the others? It may be suggested that we engage in further observations, e.g. observe the value of y when x = 7, and if y = 14, we can simply write off formula B. However, although our observations may have disproved B and an infinite number of similar formulas, an infinite number of other alternative formulas remain, each fitting the evidence to an equally satisfactory degree. We are thus faced with the problem of the underdetermination of theory by data.
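
The point is easy to check directly. Here is a minimal sketch (my own illustration, not Swinburne's) which confirms that formulas A and B agree on every observed value of x but come apart as soon as we extrapolate:

def formula_a(x):
    return 2 * x

def formula_b(x):
    # The expanded form of 2x + x(x-1)(x-2)(x-3)(x-4)(x-5)(x-6)
    return x**7 - 21*x**6 + 175*x**5 - 735*x**4 + 1624*x**3 - 1764*x**2 + 722*x

for x in range(7):                       # the observed values x = 0..6
    assert formula_a(x) == formula_b(x)  # both fit the evidence exactly

print(formula_a(7), formula_b(7))   # 14 vs 5054
print(formula_a(9), formula_b(9))   # 18 vs 181458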

For better or worse, Swinburne argues, we get around this by intuitively betting on the simplest hypothesis. In other words, we have a very strong innate bias in its favour: 'if our life depended on predicting the correct value of y for x = 9, we would think it utterly irrational to make any prediction other than y = 18'13, in accordance with the predictions made by formula A. Simplicity considerations, he maintains, are part and parcel of common sense.

We have seen that Swinburne's belief in a criterion of simplicity rests primarily on two arguments. First, there is a reductio ad absurdum based on the underdetermination of theory by data: there will always be an infinite number of hypotheses to explain the data we have available. While considerations of predictive power and fitness to data may enable us to eliminate any number of proposed hypotheses by acquiring more data, an infinite number of alternatives remain open. And since the amount of hitherto unacquired data is for all practical purposes also infinite, we would end up in absurdity without a criterion of simplicity; we would have no logical method by which to settle on one hypothesis amongst an infinite number of equally data-compliant hypotheses. Secondly, our innate bias towards simplicity needs to be accounted for, and the best way to accomplish this is to suppose that the bias genuinely reflects properties of the world. He concludes that:

"...either science is irrational [in the way it judges theories and predictions probable] or the principle of simplicity is a fundamental synthetic a priori truth." (Swinburne 1997)

But even if we grant Swinburne that we have an innate bias towards simplicity (which is probably true), I fail to see how it can be used to defend the greater antecedent likelihood of simpler theories. This purported bias tells us something about ourselves -- it does not tell us that simpler theories are more likely to predict future experience and describe the world accurately. In other words, Swinburne implicitly assumes that since our cognitive apparatus exhibits a preference for simplicity, it must do so for a reason, and that the reason must be the fact that the world is simple. I do not think that this explanation has any more force than a number of others. Indeed, our innate preference for simplicity might very well be completely incidental, or rooted in naturally selected cognitive features conducive to a primitive hunter-gatherer lifestyle. Additionally, the mere fact that something accords with common sense does not, by itself, confer on it any likelihood of truth, although it may bias us in its favour.

Swinburne's discussion of the underdetermination of theory by data aptly demonstrates the usefulness of a criterion of simplicity when it comes to weeding out the abundance of potential hypotheses. But is there really anything irrational, in the final analysis, about simply choosing some arbitrary hypothesis which happens to fit the data? As far as I can tell, Swinburne's argument is highly vulnerable to the skeptic. If we do not find it absurd to be in a position where we have no fixed, logical method by which to determine which is the most likely hypothesis, pending further data, then his argument will fail to sway us. And this purportedly absurd position would, roughly speaking, be the one to which Popper subscribes.

VII. Simplicity as Content

Popper's discussion of simplicity (Popper 1992) suggests that a criterion of simplicity may be justified by his own falsifiability criterion.

"Simple statements [...] are to be prized more highly than less simple ones because they tell us more; because their empirical content is greater; and because they are better testable." [author's italics]14

He suggests that simplicity may be replaced with the notion of greater content, which in turn supposedly yields greater falsifiability. A statement or hypothesis G is simpler than a statement or hypothesis H if it applies to a greater number of cases, i.e. if it has more content. The more cases to which it applies, the easier it is to make observations relevant to its truth or falsity. With this definition, Popper hopes to circumvent conventional and arbitrary definitions of simplicity based on practical and aesthetic grounds. His definition has the advantage of explaining why simplicity is considered desirable as a feature of theories, without the need to introduce a priori metaphysical foundations or principles of parsimonious thought to justify it. Simple statements or hypotheses are more valuable than complex ones, quite simply because their empirical content is greater, and because they are more easily testable.

In practice, any given hypothesis which makes a generalization of some sort can be saved from refutation by experience -- it is merely a question of appending a subsidiary hypothesis which explains away the conflicting data in question. Thus, we can save the hypothesis "All ravens are white" by making it more complex. Whenever we observe a black raven, we transform the hypothesis from "All ravens are white" to "All ravens are white except at t0" where t0 refers to the period of time during which we have observed non-white ravens. In the same manner it is possible to circumvent any empirical evidence purportedly falsifying the hypothesis -- but at a cost. In each case the content of the hypothesis decreases, and it therefore applies to fewer cases.

While our example of revising hypotheses concerning the colour of ravens may seem rather contrived, the aforementioned kind of ad hoc revision has its precedents in scientific practice. A classic example is the result of the 1887 Michelson-Morley experiment, which posed a problem for classical Newtonian mechanics -- its results did not match the predictions of the theory. In Newtonian physics there is the notion of absolute space, which was taken to be filled with the luminiferous aether at rest. The ad hoc hypothesis which was supposed to explain away the results of the Michelson-Morley experiment was the Lorentz-FitzGerald contraction -- the hypothesis that bodies contract in the direction of motion when they move through the aether. In that particular case, the Newtonian theory was modified so that a certain description of the theory ceased to apply to objects when they were in motion with respect to absolute space. As in the case of the ravens, appending a subsidiary hypothesis came at the price of simplicity. The greater the number of subsidiary hypotheses and exceptions we append to the original theory, the more complex it becomes. Likewise, the set of possible observations falsifying it shrinks.

Popper's account does have some serious shortcomings. It is fairly clear that hypotheses which are more susceptible to falsification have advantages over their less easily falsifiable counterparts; it is easier for us to discover their falsity (if they are false) and dismiss them from further consideration. This is unquestionably a pragmatic advantage. However, Popper's account fails to provide simplicity with any epistemic relevance. Under his scheme, simple theories have more content. But it then becomes less rational to rely on the predictions of simpler theories, for they are less probable than narrower (and, in Popper's sense, more complex) theories. Why, then, should we choose the simplest theory if we are trying to determine which one is the most likely to be true? Should we find fault with the fact that his account cannot be used to epistemically justify a criterion of simplicity? Popper would no doubt provide us with a skeptical reply, namely that we can have no ultimate assurance that the finest hypotheses or theories currently available to us are true, or even highly probable. They have only withstood our attempts to disprove them hitherto, and may well be disproven in the future. His account is thus purely negative, in the sense that his falsifiability criterion only provides a system for dismissing theories.

There is also another problem with Popper's account. Although falsifiability and content do coincide to a considerable extent, they are not the same, as Swinburne and others have pointed out (Swinburne 1997, Sober 2005). It is true that the statement "All ravens are white" is both more easily falsifiable and has greater content than the statement "All ravens are white except at t0". However, the statement "All ravens are white except at t0, and at t0 all ravens are black" seems to be equal in content to "All ravens are white" while remaining more difficult to falsify. Both statements tell us the colour of all ravens, actual and possible, at any given time, but the latter would require more observations to be falsified -- we would need both to observe the colour of the raven and to determine whether we were in the temporal interval t0 in order to establish its truth or falsity. We thus see that equating simplicity with falsifiability does not really work.

VIII. Sober on Simplicity

Sober's early notion of simplicity (Sober 1975) is similar to Popper's. However, instead of falsifiability, Sober ties simplicity to 'informativeness'. The simpler theory is the more informative theory, in the sense that it requires less additional information in order to answer one's questions. Let us again consider the example of the colour of ravens outlined earlier: "All ravens are white" is a simpler theory than "All ravens are white except at t0" under Sober's scheme, because the information "This is a raven" allows the first theory to answer the question "What colour is it?", while in the case of the latter we must also obtain an answer to the question "Are we in time interval t0?".

Sober has since rejected this account of simplicity (Sober 1994), as it remains susceptible to the same sort of criticism as Popper's. It gives us a practical reason to prefer simple theories, but does not accord them any epistemic relevance. Consequently, he has moved toward a more skeptical stance and now expresses views to the effect that simplicity considerations (and considerations of parsimony in particular) do not count unless they reflect something more fundamental. Philosophers, he suggests, may have made the error of hypostatizing simplicity (i.e. endowing it with a sui generis existence), when the fact of the matter is that it has meaning only when embedded in a specific context. As Aristotle pointed out in his criticism of Plato, there is little in common between a good general and a good flute player, although both are 'good' at their respective activities. A similar line of reasoning may be applicable in the case of simplicity considerations. Furthermore, Sober argues that there may very well be 'no such thing as the justification of [a criterion of simplicity]'15, only myriad justifications for each instance in which it is employed, based on circumstances and background knowledge. If attempts to justify simplicity considerations in terms of something else fail, we must either dismiss them as irrelevant to theory choice, or regard them as an 'ultimate consideration'. On the latter view, we account for our criterion by assuming that simplicity has intrinsic value:

"Just as the question 'why be rational?' may have no non-circular answer, the same may be true of the question 'why should simplicity be considered in evaluating the plausibility of hypotheses?'" (Sober 2001).

This is a rather discouraging conclusion, and seems to be a hopeless last resort. Endowing simplicity considerations with intrinsic value (and thereby laying the question to rest) is, when all is said and done, equivalent to providing no justification for them at all.

IX. Quine on the Maxim of the Simplicity of Nature

Quine professes a taste for 'clear skies' and 'desert landscapes'16. In "On Simple Theories of a Complex World" (Quine 1966) he argues that a preference for simple theories based on the epistemic belief that they are more probable -- what he calls the 'maxim of the simplicity of nature' -- may be traced to four primary causes. First, there is the rather unsatisfying psychological explanation of wishful thinking, or optimism, whereby a systematic bias in favour of simple theories is rooted in extra-logical features of human thought; we hope that the world is simple, and therefore we prefer simple theories. Then there is what Quine refers to as our 'perceptual bias', a 'subjective selectivity that makes us tend to see the simple and miss the complex'. The data we observe are slanted in favour of simple patterns due to contingent features of our perceptual mechanism. Thus, when reading a book, we would immediately notice if a given word was printed in the same place on every line of a page, while we would in all likelihood fail to notice if two pages in sequence had exactly the same number of words. In both cases, we have observable uniformities, but the former is much more likely to be observed due to certain features of human perception.

Quine also speculates that this 'perceptual bias' may extend to how we formulate experimental criteria. As an example, let us suppose we devise some sort of experimental criterion which we proceed to apply, where input A gives output B, input C gives output D, and so on. We thereby map our input values to certain output values. Let us furthermore suppose that the resulting map is challenged, and our critic maintains that the output we received does not reflect a pre-experimental disposition, but rather that it is induced by the order of the experiments themselves. We would try to counter his critique by performing the experiments again in different orders on identical but separate subjects. If we get roughly the same map repeatedly, it is fair to assume that it represents a genuinely pre-experimental pattern. Our experimental criterion may therefore be such that we "either get evidence of uniformity, or nothing". In other words, "the simpler of two hypotheses is sometimes opened to confirmation while its alternative is left inaccessible". There is also the possibility that a simpler theory or hypothesis will be regarded as a rough estimate of sorts, pending further data. Let us suppose, for example, that the relationship between some variables x, y and z is measured such that:

R) x = y / z

but subsequent, more detailed measurements reveal the actual relation to be

S) x = y / z + k

where k is some small constant. These new measurements supersede the old, once they have been thoroughly confirmed. However, we would probably say that S is a refinement of R, rather than an outright refutation and replacement. Similarly, if we were to first measure the constant k as 7.2 and later, in the light of more detailed measurements, as 7.22, we would look upon the latter as a refinement and not a refutation. The measurement 7.2 is ten times likelier to be confirmed than 7.22, quite simply because ten times as much deviation is tolerated. In both of these cases, our 'system of score-keeping' tolerates greater deviation with simpler hypotheses.
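
Quine's 'score-keeping' point can be made concrete with a small sketch. The helper functions below are my own illustration, assuming the usual convention that a figure reported to n decimal places claims the true value to within half a unit in the last place; nothing here is Quine's own formalism.

def tolerance(reported: str) -> float:
    """Half a unit in the last decimal place of the reported figure."""
    decimals = len(reported.split(".")[1]) if "." in reported else 0
    return 0.5 * 10 ** (-decimals)

def still_confirmed(reported: str, finer_measurement: float) -> bool:
    """Does a later, finer measurement still fall within the reported figure's tolerance?"""
    return abs(finer_measurement - float(reported)) <= tolerance(reported)

print(tolerance("7.2"), tolerance("7.22"))   # 0.05 vs 0.005: ten times the tolerated deviation
print(still_confirmed("7.2", 7.24))          # True  -- 7.24 still counts as confirming 7.2
print(still_confirmed("7.22", 7.24))         # False -- but it refutes the finer claim 7.22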

While Quine's speculations concerning the causes behind the purported relevance of simplicity considerations are interesting, they do not provide any justification for a criterion of simplicity. By appealing to a primitive sort of psychology we can account for superstition, and, similarly, we can account for simplicity considerations by appealing to various factors of the kind listed above. However, this in no way tells us whether such considerations are warranted. Quine is aware of this and, like Sober, remains skeptical about the possibility of providing an adequate definition of simplicity, much less a global justification. He supposes that simplicity must be relative to the 'texture of a conceptual scheme':

"If the basic concepts of one conceptual schema are the derivative concepts of another, and vice versa, presumably one of two hypotheses could count as simpler for the one scheme and the other for the other. This being so, how can simplicity carry any presumption of objective truth?" (Quine 1966)

To elucidate Quine's point, let us again consider an example mentioned previously: the property 'grue' -- that which is green before time T and blue after time T. Imagine a person accustomed to a conceptual scheme which is different from ours; a conceptual scheme where 'grue' is a basic property (or, in Goodman's terms, 'entrenched'), as is 'bleen' (that which is blue before time T and green after time T). To him, 'green' must be defined in terms of 'grue' and 'bleen' (i.e. 'green' = 'grue before time T and bleen after time T'). His notion of 'green' is dependent upon two other basic properties of his conceptual scheme, just as 'grue' for us is dependent on two basic properties of our conceptual scheme. Could we really convince him that 'green' is the simpler concept? I would like to think that we could.

If we tie the simplicity of terms to observation, we may be able to evade the problem. In order to determine whether an object is actually 'grue', we need to observe it both before and after time T, while the property 'green' is immediately apprehended by an observer. If we extend our criterion of simplicity by appending to it the maxim that underlying unobservable properties should not be postulated unless they are strictly necessary to the theory, we can argue that calling anything 'grue' without observing it both before and after T is unwarranted. By this argument we might get a person with a conceptual scheme where 'green' is defined in terms of 'grue' and 'bleen' to agree that 'grue before time T and bleen after time T' is a simpler concept than either 'grue' or 'bleen'.
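
To make the observational asymmetry explicit, here is a minimal sketch; it assumes the reading of 'grue' used in this section (green before time T, blue after T), and the representation of observations as (time, colour) pairs is my own illustration, not Goodman's or Quine's.

T = 100  # the cut-off time

def looks_green(observations):
    """A single observation at any time yields a verdict about greenness."""
    return all(colour == "green" for _, colour in observations)

def is_grue(observations):
    """'Grue' (green before T, blue after T) needs observations on both sides of T."""
    before = [colour for time, colour in observations if time < T]
    after = [colour for time, colour in observations if time >= T]
    if not before or not after:
        return None  # undetermined: we have not yet observed both sides of T
    return all(c == "green" for c in before) and all(c == "blue" for c in after)

print(looks_green([(42, "green")]))                # True  -- one look suffices
print(is_grue([(42, "green")]))                    # None  -- must also look after T
print(is_grue([(42, "green"), (150, "blue")]))     # True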

X. Simplicity Relative to Cognition

For my part, I am inclined to believe that Quine is on to something when he suggests that much of what we characterise as 'simple', as opposed to 'complex', may in fact be made so by virtue of contingent features of our perceptual mechanisms. However, I would like to go even further and suggest that it may also be made so by features of our cognitive apparatus, i.e. that our appraisal of simplicity is at least partly based on properties of cognition peculiar to human beings. Thus, our categorisation of some theories as simple and others as complex may not reflect any objective characteristics of the theories in question, but rather the effectiveness with which our minds grasp and employ them.

It does not require a great feat of the imagination to contemplate the notion of an alien, non-human intelligent species which engages in scientific observation and theorising about the world it inhabits. Let us call this hypothetical species Martians. Assuming that we could effectively communicate with them, would Martian philosophers or scientists agree with us about which theories are simple and which are not? Would there be any common ground at all? Possibly not. Our cognitive organ, the brain, is the product of evolution -- naturally selected for its ability to solve problems and carry out tasks conducive to survival in a particular sort of environment. It is reasonable to expect it to have specialised in facilitating a set of vital and common tasks, such as understanding language and detecting certain kinds of patterns. Indeed, there is plenty of empirical evidence which indicates that this is the case17. Thus, one might venture to say that some theories strike us as simpler than others in virtue of the fact that our brains are constituted in a particular way. There is no reason to suppose that cognitive organs that have evolved under very different circumstances would characterise simplicity in the same way.

This leads to the question of whether simplicity can be tied to the notion of computational expense (i.e. requirements in terms of processing time, processing ability and energy expenditure). On a typical modern silicon computing device the square root operation is computationally expensive when compared to other basic mathematical operations such as multiplication or division. However, it is possible to construct a device with a dedicated processing unit custom-tailored to handle square root operations efficiently -- such a dedicated unit makes square root calculations computationally cheap. Does it make sense to say that square root operations are 'simpler' for the latter device than for the former? As far as we are justified in anthropomorphising a computer, I would say that it does.
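
For what it is worth, the comparison is easy to run on whatever machine one happens to have. The following micro-benchmark sketch is my own illustration and proves nothing about hardware in general (in an interpreted language the interpreter's overhead, including the call to sqrt itself, dominates the measurement); it merely shows how one might put a number on the relative expense.

import timeit

mul = timeit.timeit("x * x", setup="x = 12345.678", number=1_000_000)
sqrt = timeit.timeit("sqrt(x)", setup="from math import sqrt; x = 12345.678",
                     number=1_000_000)

# The ratio, not the absolute figures, is what matters for the comparison.
print(f"multiplication: {mul:.3f}s    square root: {sqrt:.3f}s")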

Without wishing to delve into metaphysically dubious territory, we can perform a little thought experiment to illuminate the matter. Let us suppose, for the sake of argument, that a powerful modern silicon computing device attained something akin to consciousness. This 'intelligence' would find supposedly complex differential equations quite simple to solve, while other operations, such as pattern detection in optical input or basic speech recognition, simple enough for most humans, would strike it as very complex and prone to failure. This is, to a certain extent, a question of algorithmic implementation, but the fact remains that the computational mechanism of a silicon computer is ill suited to many tasks that humans seem to deal with quite well. The general point is that different kinds of computing devices are good at different tasks.

A critic might object to the comparison of human thought to computation. The question of whether thought equals computation is, after all, still open. However, our example above works just as well if we exchange the silicon computer for a hypothetical Martian. Our critic might also point out that by tying simplicity to the notion of computational expense, we confuse difficulty and complexity (and, correspondingly, easiness and simplicity). But is it possible to conceive of theories that are at the same time both simple and difficult, or complex yet easy? Perhaps -- it is difficult to say. In any case, the notions are certainly interrelated to such an extent that it is reasonable to speculate that simplicity may be 'in the eye of the beholder', relative to the computational specialisation of the inquirer's cognitive apparatus.

If any of these conjectures is valid -- if simplicity considerations are wholly or even partially relative to cognition -- it is difficult to see where an epistemic justification for a criterion of simplicity is to be found. Why should that which is simpler for a human brain be an indicator of truth or likelihood? However, it is easy to see how we might wish our theories and hypotheses to describe the world in a way that is comfortable and easy for our cognitive apparatus. Thus, we are again pushed in the direction of practical, methodological justification.

XI. Conclusion

What are we to make of the criterion of simplicity? We have seen that attempts to justify it epistemically by way of induction are bound to end up in circularity, since they must eventually rely on a criterion of simplicity themselves. Presenting the criterion as an a priori principle of reasoning, however, is only convincing to the extent that we are disposed to accept it as such. Finally, there is also the very real possibility that considerations of simplicity depend on the properties of the cognitive apparatus employed by the inquirer. This paints a rather bleak picture of our prospects.

There is certainly an abundance of explanations for why we do in fact apply a criterion of simplicity, but potential justifications for it in terms of likelihood are fraught with difficulties that do not seem to be amenable to any clear-cut, satisfactory solutions. Of the arguments considered above, none, I think, succeeds in providing a compelling epistemic justification for a criterion of simplicity. Furthermore, while appealing to aesthetics or practicality of application may provide us with a methodological justification of sorts, it remains relative to the characteristics of the inquirer, and cannot be used to pin down simplicity as an objective, universal theoretical virtue. Perhaps we must rest satisfied with that.

Reykjavík, May 2006
Sveinbjorn Thordarson






Endnotes

1. Original Latin: "Nunquam ponenda est pluralitas sine necessitate". Quoted in Thorburn 1918; see also Ockham 1990.
2. Quoted in Maurer 1984.
3. Original Latin: "Entium varietates non temere esse minuendas". Vide Kant 1950.
4. Vide e.g. Peirce 1992, Sober 2001.
5. For a discussion on this, see Thorburn 1918. Many formulations of the razor that have been attributed to Ockham do not appear in any of his extant writings.
6. Vide Goodman 1983. It should be noted that Goodman thinks this is a question of "entrenchment". If a certain language community were accustomed to using 'grue', the term 'green' might be defined from the term 'grue', e.g. green = 'that which is grue before time T'. 'Grue' might therefore be seen as the basic concept, from which 'green' is derived. I discuss this further at a later point.
7. For further discussion on this topic, see Smart 1984.
8. Vide Aquinas 1945.
9. Mathematicians will, as an example, sometimes speak of a syntactically elegant proof as being 'beautiful'.
10. Vide "The Fixation of Belief" in Peirce 1992 p. 109-123
11. I borrow this particular example from Swinburne 1997. According to the myth, Archimedes formulated his law in the bath and Newton was supposedly inspired by watching an apple drop from a tree. Einstein worked in a patent office in his youth.
12. My modification consists of altering Swinburne's formulation of the alternative, complex formula (B) so as to stress the potential complexity of the alternative formulas that yield the data, without introducing a third variable z. Swinburne's original alternative formula was 'y = 2x + x(x-1)(x-2)(x-3)(x-4)(x-5)(x-6)z'.
13. Vide Swinburne 1997.
14. Vide Chapter 7, "Simplicity", in Popper 1992.
15. Vide Sober 2001.
16. Vide "On What There Is" in Quine 1953, pp. 1-13.
17. Vide Pinker 1994.



References

Aquinas, T. (1945): Basic Writings of St. Thomas Aquinas, trans. A.C. Pegis, New York: Random House.

Aristotle (2001): The Basic Works of Aristotle, ed. Richard McKeon, New York: Modern Library.

Baker, Alan (2004): "Simplicity." Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/simplicity/

Goodman, Nelson (1983): Fact, Fiction and Forecast, 4th ed., Cambridge, Massachusetts: Harvard University Press.

Hesse, M. (1967): "Simplicity." The Encyclopedia of Philosophy. Vol. 7, pp. 445-448. New York: Collier Macmillan.

Kant, Immanuel (1950): The Critique of Pure Reason, trans. Kemp Smith, London. Available at: http://www.hkbu.edu.hk/~ppp/cpr/toc.html

Kripke, Saul (1982): Wittgenstein on Rules and Private Language, Cambridge, Massachusetts: Harvard University Press.

Maurer, A. (1984) "Ockham's Razor and Chatton's Anti-Razor", Mediaeval Studies, 46, 463-475

Ockham, William of (1990): Philosophical Writings, ed. & transl. Philotheus Boehner, Indianapolis: Hackett Publishing Co.

Peirce, C. S. (1992): "The Fixation of Belief", in The Essential Peirce, Indianapolis: Indiana University Press.

Pinker, Steven (1994): The Language Instinct, England: Penguin Books Ltd.

Popper, Karl (1992): The Logic of Scientific Discovery, London: Routledge.

Quine, W.V. (1953): "On What There Is", in From a Logical Point of View, New York: Harper & Row.

Quine, W.V. (1966): "On Simple Theories of a Complex World", in The Ways of Paradox and other essays, Cambridge, Massachusetts: Harvard University Press.

Smart, J. J. C. (1984) "Ockham's Razor", in Principles of Philosophical Reasoning, New Jersey: Rowman & Allanheld, pp. 118-131.

Sober, E. (1975): Simplicity, Oxford University Press.

Sober, E. (1994) "Let's Razor Ockham's Razor", in From a Biological Point of View, Cambridge: Cambridge University Press, pp. 136-157.

Sober, E. (2001) "What is the Problem of Simplicity?" in Zellner et al. (eds.) (2001), 13-31. http://philosophy.wisc.edu/sober/TILBURG.pdf.

Sober, E. (2005) "Parsimony" http://philosophy.wisc.edu/sober/pars.pdf

Swinburne, R. (1997): Simplicity as Evidence for Truth, Milwaukee, Wisconsin: Marquette University Press.

Thorburn, W. (1918) "The Myth of Occam's Razor", Mind, 27, pp. 345-53.